Test Report: KVM_Linux 16144

5a8f8cb541418da3ae1b3ffd9c263e271e7d084b:2023-03-31:28590

Tests failed (3/312)

Order   Failed test                                     Duration (s)
258     TestPause/serial/SecondStartNoReconfiguration   95.4
261     TestNoKubernetes/serial/StartWithK8s            38.5
263     TestNoKubernetes/serial/StartWithStopK8s        22.51
TestPause/serial/SecondStartNoReconfiguration (95.4s)
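
For context, the assertion at pause_test.go:100 runs a second `minikube start` against the existing profile and expects its stdout to contain the reconfiguration message. A minimal standalone sketch of that check, built only from the binary path, profile name, flags, and expected string shown in the log below (this is not minikube's actual test code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Message expected in the second-start stdout when the running cluster
// needs no reconfiguration (the string checked at pause_test.go:100).
const wantLog = "The running cluster does not require reconfiguration"

func main() {
	// Same invocation as pause_test.go:92 in the log below.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "pause-939189",
		"--alsologtostderr", "-v=1", "--driver=kvm2")
	stdout, err := cmd.Output() // stdout only; the test inspects the start log output
	if err != nil {
		fmt.Printf("second start failed: %v\n", err)
		return
	}
	if !strings.Contains(string(stdout), wantLog) {
		// This is the condition that failed in the run below.
		fmt.Printf("expected the second start log output to include %q\n", wantLog)
	}
}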

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-939189 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-939189 --alsologtostderr -v=1 --driver=kvm2 : (1m31.447901144s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-939189] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16144
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16144-3494/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16144-3494/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-939189 in cluster pause-939189
	* Updating the running kvm2 "pause-939189" VM ...
	* Preparing Kubernetes v1.26.3 on Docker 20.10.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-939189" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0331 18:03:25.711833   32536 out.go:296] Setting OutFile to fd 1 ...
	I0331 18:03:25.712008   32536 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 18:03:25.712026   32536 out.go:309] Setting ErrFile to fd 2...
	I0331 18:03:25.712033   32536 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 18:03:25.712166   32536 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16144-3494/.minikube/bin
	I0331 18:03:25.712806   32536 out.go:303] Setting JSON to false
	I0331 18:03:25.713974   32536 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2757,"bootTime":1680283049,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1031-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0331 18:03:25.714062   32536 start.go:135] virtualization: kvm guest
	I0331 18:03:25.717124   32536 out.go:177] * [pause-939189] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0331 18:03:25.718745   32536 notify.go:220] Checking for updates...
	I0331 18:03:25.718754   32536 out.go:177]   - MINIKUBE_LOCATION=16144
	I0331 18:03:25.720301   32536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 18:03:25.721911   32536 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16144-3494/kubeconfig
	I0331 18:03:25.723493   32536 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16144-3494/.minikube
	I0331 18:03:25.725094   32536 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0331 18:03:25.726699   32536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0331 18:03:25.728858   32536 config.go:182] Loaded profile config "pause-939189": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 18:03:25.729256   32536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 18:03:25.729306   32536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 18:03:25.747285   32536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45791
	I0331 18:03:25.747792   32536 main.go:141] libmachine: () Calling .GetVersion
	I0331 18:03:25.748496   32536 main.go:141] libmachine: Using API Version  1
	I0331 18:03:25.748525   32536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 18:03:25.749043   32536 main.go:141] libmachine: () Calling .GetMachineName
	I0331 18:03:25.749253   32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
	I0331 18:03:25.749440   32536 driver.go:365] Setting default libvirt URI to qemu:///system
	I0331 18:03:25.749869   32536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 18:03:25.749913   32536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 18:03:25.769314   32536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I0331 18:03:25.769804   32536 main.go:141] libmachine: () Calling .GetVersion
	I0331 18:03:25.770315   32536 main.go:141] libmachine: Using API Version  1
	I0331 18:03:25.770364   32536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 18:03:25.770719   32536 main.go:141] libmachine: () Calling .GetMachineName
	I0331 18:03:25.770905   32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
	I0331 18:03:25.813738   32536 out.go:177] * Using the kvm2 driver based on existing profile
	I0331 18:03:25.815408   32536 start.go:295] selected driver: kvm2
	I0331 18:03:25.815426   32536 start.go:859] validating driver "kvm2" against &{Name:pause-939189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16191/minikube-v1.29.0-1680115329-16191-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 C
lusterName:pause-939189 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer
:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 18:03:25.815625   32536 start.go:870] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0331 18:03:25.816023   32536 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 18:03:25.816128   32536 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16144-3494/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0331 18:03:25.833164   32536 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0331 18:03:25.833976   32536 cni.go:84] Creating CNI manager for ""
	I0331 18:03:25.834011   32536 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 18:03:25.834024   32536 start_flags.go:319] config:
	{Name:pause-939189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16191/minikube-v1.29.0-1680115329-16191-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:pause-939189 Namespace:default APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-prov
isioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 18:03:25.834220   32536 iso.go:125] acquiring lock: {Name:mk48583bcdf05c8e72651ed56790356a32c028b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 18:03:25.836510   32536 out.go:177] * Starting control plane node pause-939189 in cluster pause-939189
	I0331 18:03:25.837952   32536 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0331 18:03:25.838005   32536 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16144-3494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4
	I0331 18:03:25.838024   32536 cache.go:57] Caching tarball of preloaded images
	I0331 18:03:25.838124   32536 preload.go:174] Found /home/jenkins/minikube-integration/16144-3494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0331 18:03:25.838137   32536 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.3 on docker
	I0331 18:03:25.838332   32536 profile.go:148] Saving config to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/config.json ...
	I0331 18:03:25.838550   32536 cache.go:193] Successfully downloaded all kic artifacts
	I0331 18:03:25.838577   32536 start.go:364] acquiring machines lock for pause-939189: {Name:mkfdc5208de17d93700ea90324b4f36214eab469 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0331 18:03:40.264580   32536 start.go:368] acquired machines lock for "pause-939189" in 14.425951672s
	I0331 18:03:40.264632   32536 start.go:96] Skipping create...Using existing machine configuration
	I0331 18:03:40.264640   32536 fix.go:55] fixHost starting: 
	I0331 18:03:40.265105   32536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 18:03:40.265146   32536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 18:03:40.284631   32536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
	I0331 18:03:40.285088   32536 main.go:141] libmachine: () Calling .GetVersion
	I0331 18:03:40.285618   32536 main.go:141] libmachine: Using API Version  1
	I0331 18:03:40.285642   32536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 18:03:40.285948   32536 main.go:141] libmachine: () Calling .GetMachineName
	I0331 18:03:40.286159   32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
	I0331 18:03:40.286413   32536 main.go:141] libmachine: (pause-939189) Calling .GetState
	I0331 18:03:40.288318   32536 fix.go:103] recreateIfNeeded on pause-939189: state=Running err=<nil>
	W0331 18:03:40.288341   32536 fix.go:129] unexpected machine state, will restart: <nil>
	I0331 18:03:40.292995   32536 out.go:177] * Updating the running kvm2 "pause-939189" VM ...
	I0331 18:03:40.294650   32536 machine.go:88] provisioning docker machine ...
	I0331 18:03:40.294679   32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
	I0331 18:03:40.294921   32536 main.go:141] libmachine: (pause-939189) Calling .GetMachineName
	I0331 18:03:40.295097   32536 buildroot.go:166] provisioning hostname "pause-939189"
	I0331 18:03:40.295117   32536 main.go:141] libmachine: (pause-939189) Calling .GetMachineName
	I0331 18:03:40.295290   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
	I0331 18:03:40.298785   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:40.299195   32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
	I0331 18:03:40.299228   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:40.299474   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
	I0331 18:03:40.299722   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
	I0331 18:03:40.299872   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
	I0331 18:03:40.300020   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
	I0331 18:03:40.300164   32536 main.go:141] libmachine: Using SSH client type: native
	I0331 18:03:40.300581   32536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0331 18:03:40.300595   32536 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-939189 && echo "pause-939189" | sudo tee /etc/hostname
	I0331 18:03:40.446699   32536 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-939189
	
	I0331 18:03:40.446732   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
	I0331 18:03:40.450226   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:40.450649   32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
	I0331 18:03:40.450686   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:40.450929   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
	I0331 18:03:40.451154   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
	I0331 18:03:40.451364   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
	I0331 18:03:40.451533   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
	I0331 18:03:40.451710   32536 main.go:141] libmachine: Using SSH client type: native
	I0331 18:03:40.452300   32536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0331 18:03:40.452330   32536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-939189' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-939189/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-939189' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0331 18:03:40.582080   32536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0331 18:03:40.582121   32536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16144-3494/.minikube CaCertPath:/home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16144-3494/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16144-3494/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16144-3494/.minikube}
	I0331 18:03:40.582242   32536 buildroot.go:174] setting up certificates
	I0331 18:03:40.582281   32536 provision.go:83] configureAuth start
	I0331 18:03:40.582305   32536 main.go:141] libmachine: (pause-939189) Calling .GetMachineName
	I0331 18:03:40.582633   32536 main.go:141] libmachine: (pause-939189) Calling .GetIP
	I0331 18:03:40.587018   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:40.587650   32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
	I0331 18:03:40.587681   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:40.588077   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
	I0331 18:03:40.598852   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:40.599660   32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
	I0331 18:03:40.599795   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:40.600229   32536 provision.go:138] copyHostCerts
	I0331 18:03:40.600299   32536 exec_runner.go:144] found /home/jenkins/minikube-integration/16144-3494/.minikube/cert.pem, removing ...
	I0331 18:03:40.600312   32536 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16144-3494/.minikube/cert.pem
	I0331 18:03:40.600381   32536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16144-3494/.minikube/cert.pem (1123 bytes)
	I0331 18:03:40.600543   32536 exec_runner.go:144] found /home/jenkins/minikube-integration/16144-3494/.minikube/key.pem, removing ...
	I0331 18:03:40.600551   32536 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16144-3494/.minikube/key.pem
	I0331 18:03:40.600587   32536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16144-3494/.minikube/key.pem (1679 bytes)
	I0331 18:03:40.600675   32536 exec_runner.go:144] found /home/jenkins/minikube-integration/16144-3494/.minikube/ca.pem, removing ...
	I0331 18:03:40.600681   32536 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16144-3494/.minikube/ca.pem
	I0331 18:03:40.600708   32536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16144-3494/.minikube/ca.pem (1078 bytes)
	I0331 18:03:40.600770   32536 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16144-3494/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca-key.pem org=jenkins.pause-939189 san=[192.168.39.142 192.168.39.142 localhost 127.0.0.1 minikube pause-939189]
	I0331 18:03:40.860159   32536 provision.go:172] copyRemoteCerts
	I0331 18:03:40.860253   32536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0331 18:03:40.860291   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
	I0331 18:03:40.864535   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:40.865012   32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
	I0331 18:03:40.865057   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:40.865401   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
	I0331 18:03:40.865633   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
	I0331 18:03:40.865835   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
	I0331 18:03:40.866014   32536 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/machines/pause-939189/id_rsa Username:docker}
	I0331 18:03:40.969464   32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0331 18:03:41.034639   32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0331 18:03:41.070577   32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0331 18:03:41.113651   32536 provision.go:86] duration metric: configureAuth took 531.350646ms
	I0331 18:03:41.113705   32536 buildroot.go:189] setting minikube options for container-runtime
	I0331 18:03:41.113981   32536 config.go:182] Loaded profile config "pause-939189": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 18:03:41.114013   32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
	I0331 18:03:41.115580   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
	I0331 18:03:41.119107   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:41.119579   32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
	I0331 18:03:41.119615   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:41.120112   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
	I0331 18:03:41.120296   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
	I0331 18:03:41.120454   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
	I0331 18:03:41.120602   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
	I0331 18:03:41.120761   32536 main.go:141] libmachine: Using SSH client type: native
	I0331 18:03:41.121332   32536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0331 18:03:41.121346   32536 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0331 18:03:41.283583   32536 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0331 18:03:41.283617   32536 buildroot.go:70] root file system type: tmpfs
	I0331 18:03:41.283796   32536 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0331 18:03:41.283838   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
	I0331 18:03:41.287411   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:41.287886   32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
	I0331 18:03:41.287925   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:41.288483   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
	I0331 18:03:41.288709   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
	I0331 18:03:41.288961   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
	I0331 18:03:41.289148   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
	I0331 18:03:41.289395   32536 main.go:141] libmachine: Using SSH client type: native
	I0331 18:03:41.289940   32536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0331 18:03:41.290035   32536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0331 18:03:41.461458   32536 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0331 18:03:41.461497   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
	I0331 18:03:41.464975   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:41.465415   32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
	I0331 18:03:41.465442   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:41.465895   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
	I0331 18:03:41.466145   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
	I0331 18:03:41.466339   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
	I0331 18:03:41.466475   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
	I0331 18:03:41.466670   32536 main.go:141] libmachine: Using SSH client type: native
	I0331 18:03:41.467276   32536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0331 18:03:41.467308   32536 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0331 18:03:41.624909   32536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0331 18:03:41.624937   32536 machine.go:91] provisioned docker machine in 1.330271176s
	I0331 18:03:41.624961   32536 start.go:300] post-start starting for "pause-939189" (driver="kvm2")
	I0331 18:03:41.624970   32536 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0331 18:03:41.624996   32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
	I0331 18:03:41.625358   32536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0331 18:03:41.625392   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
	I0331 18:03:41.629902   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:41.630339   32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
	I0331 18:03:41.630372   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:41.630727   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
	I0331 18:03:41.630956   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
	I0331 18:03:41.631134   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
	I0331 18:03:41.631289   32536 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/machines/pause-939189/id_rsa Username:docker}
	I0331 18:03:41.759900   32536 ssh_runner.go:195] Run: cat /etc/os-release
	I0331 18:03:41.776514   32536 info.go:137] Remote host: Buildroot 2021.02.12
	I0331 18:03:41.776548   32536 filesync.go:126] Scanning /home/jenkins/minikube-integration/16144-3494/.minikube/addons for local assets ...
	I0331 18:03:41.776627   32536 filesync.go:126] Scanning /home/jenkins/minikube-integration/16144-3494/.minikube/files for local assets ...
	I0331 18:03:41.776731   32536 filesync.go:149] local asset: /home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/105402.pem -> 105402.pem in /etc/ssl/certs
	I0331 18:03:41.776862   32536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0331 18:03:41.790408   32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/105402.pem --> /etc/ssl/certs/105402.pem (1708 bytes)
	I0331 18:03:41.835349   32536 start.go:303] post-start completed in 210.36981ms
	I0331 18:03:41.835375   32536 fix.go:57] fixHost completed within 1.570735042s
	I0331 18:03:41.835400   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
	I0331 18:03:41.838925   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:41.839492   32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
	I0331 18:03:41.839523   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:41.839837   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
	I0331 18:03:41.840052   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
	I0331 18:03:41.840238   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
	I0331 18:03:41.840382   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
	I0331 18:03:41.840575   32536 main.go:141] libmachine: Using SSH client type: native
	I0331 18:03:41.841179   32536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0331 18:03:41.841201   32536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0331 18:03:41.993866   32536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1680285821.989568532
	
	I0331 18:03:41.993892   32536 fix.go:207] guest clock: 1680285821.989568532
	I0331 18:03:41.993903   32536 fix.go:220] Guest: 2023-03-31 18:03:41.989568532 +0000 UTC Remote: 2023-03-31 18:03:41.835379949 +0000 UTC m=+16.167388203 (delta=154.188583ms)
	I0331 18:03:41.993945   32536 fix.go:191] guest clock delta is within tolerance: 154.188583ms
	I0331 18:03:41.993956   32536 start.go:83] releasing machines lock for "pause-939189", held for 1.729345955s
	I0331 18:03:41.993982   32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
	I0331 18:03:41.994291   32536 main.go:141] libmachine: (pause-939189) Calling .GetIP
	I0331 18:03:41.997554   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:41.998095   32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
	I0331 18:03:41.998131   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:41.998487   32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
	I0331 18:03:41.999887   32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
	I0331 18:03:42.000164   32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
	I0331 18:03:42.000253   32536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0331 18:03:42.000300   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
	I0331 18:03:42.000726   32536 ssh_runner.go:195] Run: cat /version.json
	I0331 18:03:42.000773   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
	I0331 18:03:42.004537   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:42.005631   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:42.006143   32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
	I0331 18:03:42.006178   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:42.006623   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
	I0331 18:03:42.006868   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
	I0331 18:03:42.007059   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
	I0331 18:03:42.007123   32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
	I0331 18:03:42.007140   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:42.007276   32536 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/machines/pause-939189/id_rsa Username:docker}
	I0331 18:03:42.008030   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
	I0331 18:03:42.008201   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
	I0331 18:03:42.008351   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
	I0331 18:03:42.008558   32536 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/machines/pause-939189/id_rsa Username:docker}
	I0331 18:03:42.135551   32536 ssh_runner.go:195] Run: systemctl --version
	I0331 18:03:42.144080   32536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0331 18:03:42.152644   32536 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0331 18:03:42.152727   32536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0331 18:03:42.167739   32536 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0331 18:03:42.167766   32536 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0331 18:03:42.167860   32536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 18:03:42.215918   32536 docker.go:639] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0331 18:03:42.215946   32536 docker.go:569] Images already preloaded, skipping extraction
	I0331 18:03:42.215958   32536 start.go:481] detecting cgroup driver to use...
	I0331 18:03:42.216072   32536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 18:03:42.242406   32536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0331 18:03:42.257253   32536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0331 18:03:42.277189   32536 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0331 18:03:42.277247   32536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0331 18:03:42.292009   32536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 18:03:42.305328   32536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0331 18:03:42.319260   32536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 18:03:42.332370   32536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0331 18:03:42.344253   32536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0331 18:03:42.355124   32536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0331 18:03:42.367913   32536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0331 18:03:42.378810   32536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 18:03:42.566800   32536 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0331 18:03:42.599727   32536 start.go:481] detecting cgroup driver to use...
	I0331 18:03:42.599836   32536 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0331 18:03:42.622523   32536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0331 18:03:42.644613   32536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0331 18:03:42.673317   32536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0331 18:03:42.692314   32536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0331 18:03:42.714726   32536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 18:03:42.742474   32536 ssh_runner.go:195] Run: which cri-dockerd
	I0331 18:03:42.748476   32536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0331 18:03:42.761011   32536 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0331 18:03:42.787174   32536 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0331 18:03:43.008370   32536 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0331 18:03:43.207569   32536 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0331 18:03:43.207603   32536 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0331 18:03:43.230378   32536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 18:03:43.434598   32536 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0331 18:03:54.846657   32536 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.412025511s)
	I0331 18:03:54.847101   32536 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0331 18:03:54.987722   32536 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0331 18:03:55.158813   32536 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0331 18:03:55.315724   32536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 18:03:55.501954   32536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0331 18:03:55.539554   32536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 18:03:55.719919   32536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0331 18:03:56.206087   32536 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0331 18:03:56.206166   32536 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0331 18:03:56.220258   32536 start.go:549] Will wait 60s for crictl version
	I0331 18:03:56.220332   32536 ssh_runner.go:195] Run: which crictl
	I0331 18:03:56.228546   32536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0331 18:03:56.421849   32536 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0331 18:03:56.421930   32536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0331 18:03:56.482352   32536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0331 18:03:56.553221   32536 out.go:204] * Preparing Kubernetes v1.26.3 on Docker 20.10.23 ...
	I0331 18:03:56.553294   32536 main.go:141] libmachine: (pause-939189) Calling .GetIP
	I0331 18:03:56.556558   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:56.556972   32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
	I0331 18:03:56.557002   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:03:56.557359   32536 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0331 18:03:56.561797   32536 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0331 18:03:56.561869   32536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 18:03:56.615248   32536 docker.go:639] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0331 18:03:56.615280   32536 docker.go:569] Images already preloaded, skipping extraction
	I0331 18:03:56.615355   32536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 18:03:56.658918   32536 docker.go:639] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0331 18:03:56.658945   32536 cache_images.go:84] Images are preloaded, skipping loading
	I0331 18:03:56.659011   32536 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0331 18:03:56.745659   32536 cni.go:84] Creating CNI manager for ""
	I0331 18:03:56.745691   32536 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 18:03:56.745704   32536 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0331 18:03:56.745724   32536 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.142 APIServerPort:8443 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-939189 NodeName:pause-939189 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0331 18:03:56.745910   32536 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "pause-939189"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.142
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.142"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0331 18:03:56.745991   32536 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=pause-939189 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.3 ClusterName:pause-939189 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0331 18:03:56.746062   32536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3
	I0331 18:03:56.761126   32536 binaries.go:44] Found k8s binaries, skipping transfer
	I0331 18:03:56.761203   32536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0331 18:03:56.780757   32536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0331 18:03:56.818141   32536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0331 18:03:56.854842   32536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0331 18:03:56.929502   32536 ssh_runner.go:195] Run: grep 192.168.39.142	control-plane.minikube.internal$ /etc/hosts
	I0331 18:03:56.936882   32536 certs.go:56] Setting up /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189 for IP: 192.168.39.142
	I0331 18:03:56.936926   32536 certs.go:186] acquiring lock for shared ca certs: {Name:mk5b2b979756b4a682c5be81dc53f006bb9a7a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 18:03:56.937093   32536 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16144-3494/.minikube/ca.key
	I0331 18:03:56.937164   32536 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16144-3494/.minikube/proxy-client-ca.key
	I0331 18:03:56.937292   32536 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/client.key
	I0331 18:03:56.937377   32536 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/apiserver.key.4bb0a69b
	I0331 18:03:56.937427   32536 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/proxy-client.key
	I0331 18:03:56.937560   32536 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/10540.pem (1338 bytes)
	W0331 18:03:56.937597   32536 certs.go:397] ignoring /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/10540_empty.pem, impossibly tiny 0 bytes
	I0331 18:03:56.937611   32536 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca-key.pem (1675 bytes)
	I0331 18:03:56.937646   32536 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca.pem (1078 bytes)
	I0331 18:03:56.937677   32536 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/cert.pem (1123 bytes)
	I0331 18:03:56.937706   32536 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/key.pem (1679 bytes)
	I0331 18:03:56.937759   32536 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/105402.pem (1708 bytes)
	I0331 18:03:56.938525   32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0331 18:03:56.979110   32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0331 18:03:57.063983   32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0331 18:03:57.122723   32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0331 18:03:57.163289   32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0331 18:03:57.222469   32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0331 18:03:57.290463   32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0331 18:03:57.348893   32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0331 18:03:57.411932   32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/105402.pem --> /usr/share/ca-certificates/105402.pem (1708 bytes)
	I0331 18:03:57.482311   32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0331 18:03:57.559793   32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/certs/10540.pem --> /usr/share/ca-certificates/10540.pem (1338 bytes)
	I0331 18:03:57.611524   32536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0331 18:03:57.645170   32536 ssh_runner.go:195] Run: openssl version
	I0331 18:03:57.654996   32536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0331 18:03:57.667216   32536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0331 18:03:57.672747   32536 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 31 17:21 /usr/share/ca-certificates/minikubeCA.pem
	I0331 18:03:57.672820   32536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0331 18:03:57.680852   32536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0331 18:03:57.710339   32536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10540.pem && ln -fs /usr/share/ca-certificates/10540.pem /etc/ssl/certs/10540.pem"
	I0331 18:03:57.723580   32536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10540.pem
	I0331 18:03:57.734606   32536 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 31 17:26 /usr/share/ca-certificates/10540.pem
	I0331 18:03:57.734679   32536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10540.pem
	I0331 18:03:57.753538   32536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10540.pem /etc/ssl/certs/51391683.0"
	I0331 18:03:57.770397   32536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105402.pem && ln -fs /usr/share/ca-certificates/105402.pem /etc/ssl/certs/105402.pem"
	I0331 18:03:57.807350   32536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105402.pem
	I0331 18:03:57.833625   32536 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 31 17:26 /usr/share/ca-certificates/105402.pem
	I0331 18:03:57.833709   32536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105402.pem
	I0331 18:03:57.848933   32536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105402.pem /etc/ssl/certs/3ec20f2e.0"
	I0331 18:03:57.918200   32536 kubeadm.go:401] StartCluster: {Name:pause-939189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16191/minikube-v1.29.0-1680115329-16191-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:pause-939189 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 18:03:57.918410   32536 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0331 18:03:58.044485   32536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0331 18:03:58.075416   32536 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0331 18:03:58.075438   32536 kubeadm.go:633] restartCluster start
	I0331 18:03:58.075500   32536 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0331 18:03:58.094934   32536 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0331 18:03:58.095861   32536 kubeconfig.go:92] found "pause-939189" server: "https://192.168.39.142:8443"
	I0331 18:03:58.097153   32536 kapi.go:59] client config for pause-939189: &rest.Config{Host:"https://192.168.39.142:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/client.crt", KeyFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/client.key", CAFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x192bee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0331 18:03:58.098282   32536 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0331 18:03:58.138626   32536 api_server.go:165] Checking apiserver status ...
	I0331 18:03:58.138701   32536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 18:03:58.165367   32536 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 18:03:58.665630   32536 api_server.go:165] Checking apiserver status ...
	I0331 18:03:58.665725   32536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 18:03:58.688400   32536 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5686/cgroup
	I0331 18:03:58.711356   32536 api_server.go:181] apiserver freezer: "11:freezer:/kubepods/burstable/podc554e721c1674f8d3807d01647788069/874fcc56f9f627f0ba77d60510e34b52a8bc53cc9dfd44bf1837048106c10090"
	I0331 18:03:58.711430   32536 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podc554e721c1674f8d3807d01647788069/874fcc56f9f627f0ba77d60510e34b52a8bc53cc9dfd44bf1837048106c10090/freezer.state
	I0331 18:03:58.729765   32536 api_server.go:203] freezer state: "THAWED"
	I0331 18:03:58.729842   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:03.730635   32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0331 18:04:03.730708   32536 retry.go:31] will retry after 264.872025ms: state is "Stopped"
	I0331 18:04:03.996172   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:08.997074   32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0331 18:04:08.997124   32536 retry.go:31] will retry after 349.636902ms: state is "Stopped"
	I0331 18:04:09.347652   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:14.348544   32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0331 18:04:14.348606   32536 api_server.go:165] Checking apiserver status ...
	I0331 18:04:14.348662   32536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 18:04:14.373754   32536 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5686/cgroup
	I0331 18:04:14.394637   32536 api_server.go:181] apiserver freezer: "11:freezer:/kubepods/burstable/podc554e721c1674f8d3807d01647788069/874fcc56f9f627f0ba77d60510e34b52a8bc53cc9dfd44bf1837048106c10090"
	I0331 18:04:14.394719   32536 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podc554e721c1674f8d3807d01647788069/874fcc56f9f627f0ba77d60510e34b52a8bc53cc9dfd44bf1837048106c10090/freezer.state
	I0331 18:04:14.407709   32536 api_server.go:203] freezer state: "THAWED"
	I0331 18:04:14.407739   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:19.408261   32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0331 18:04:19.408305   32536 retry.go:31] will retry after 215.218871ms: state is "Stopped"
	I0331 18:04:19.624488   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:19.625014   32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
	I0331 18:04:19.625050   32536 retry.go:31] will retry after 293.483793ms: state is "Stopped"
	I0331 18:04:19.919513   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:19.920238   32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
	I0331 18:04:19.920283   32536 retry.go:31] will retry after 486.512463ms: state is "Stopped"
	I0331 18:04:20.407398   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:20.408179   32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
	I0331 18:04:20.408229   32536 retry.go:31] will retry after 404.6604ms: state is "Stopped"
	I0331 18:04:20.813782   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:20.814451   32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
	I0331 18:04:20.814507   32536 retry.go:31] will retry after 641.020358ms: state is "Stopped"
	I0331 18:04:21.456361   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:21.457048   32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
	I0331 18:04:21.457086   32536 retry.go:31] will retry after 754.462657ms: state is "Stopped"
	I0331 18:04:22.211721   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:22.212356   32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
	I0331 18:04:22.212406   32536 retry.go:31] will retry after 1.115104449s: state is "Stopped"
	I0331 18:04:23.328674   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:23.329495   32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
	I0331 18:04:23.329540   32536 retry.go:31] will retry after 1.24240954s: state is "Stopped"
	I0331 18:04:24.572925   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:24.573488   32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
	I0331 18:04:24.573527   32536 retry.go:31] will retry after 1.380365448s: state is "Stopped"
	I0331 18:04:25.954682   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:25.955486   32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
	I0331 18:04:25.955533   32536 retry.go:31] will retry after 1.543167733s: state is "Stopped"
	I0331 18:04:27.499418   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:27.500028   32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
	I0331 18:04:27.500074   32536 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0331 18:04:27.500080   32536 kubeadm.go:1120] stopping kube-system containers ...
	I0331 18:04:27.500125   32536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0331 18:04:27.535578   32536 docker.go:465] Stopping containers: [a0ad0a35a3e0 b4599f5bff86 9999f58d2765 b400c024f135 5e8b08d2a8f2 874fcc56f9f6 8ace7d6c4bee b034146fe7e8 6981b4d73a6c f5b35d44675c c447bce0c8ae 4045aa0f265a 69e745cdf53a 591b321a8a1e d2157bcefdc1 811553fdd488 34d917f20f26 5d91e467d3df d0750d4bcfa2 e65d5faade51 6bf6a130793f b9cb25554741]
	I0331 18:04:27.535661   32536 ssh_runner.go:195] Run: docker stop a0ad0a35a3e0 b4599f5bff86 9999f58d2765 b400c024f135 5e8b08d2a8f2 874fcc56f9f6 8ace7d6c4bee b034146fe7e8 6981b4d73a6c f5b35d44675c c447bce0c8ae 4045aa0f265a 69e745cdf53a 591b321a8a1e d2157bcefdc1 811553fdd488 34d917f20f26 5d91e467d3df d0750d4bcfa2 e65d5faade51 6bf6a130793f b9cb25554741
	I0331 18:04:32.786301   32536 ssh_runner.go:235] Completed: docker stop a0ad0a35a3e0 b4599f5bff86 9999f58d2765 b400c024f135 5e8b08d2a8f2 874fcc56f9f6 8ace7d6c4bee b034146fe7e8 6981b4d73a6c f5b35d44675c c447bce0c8ae 4045aa0f265a 69e745cdf53a 591b321a8a1e d2157bcefdc1 811553fdd488 34d917f20f26 5d91e467d3df d0750d4bcfa2 e65d5faade51 6bf6a130793f b9cb25554741: (5.25059671s)
	I0331 18:04:32.786382   32536 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0331 18:04:32.827244   32536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0331 18:04:32.840888   32536 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Mar 31 18:02 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Mar 31 18:02 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Mar 31 18:02 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Mar 31 18:02 /etc/kubernetes/scheduler.conf
	
	I0331 18:04:32.840952   32536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0331 18:04:32.851947   32536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0331 18:04:32.861389   32536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0331 18:04:32.870904   32536 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0331 18:04:32.870958   32536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0331 18:04:32.879952   32536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0331 18:04:32.890337   32536 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0331 18:04:32.890401   32536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0331 18:04:32.902509   32536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0331 18:04:32.912358   32536 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0331 18:04:32.912383   32536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 18:04:33.049768   32536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 18:04:34.137543   32536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.087742882s)
	I0331 18:04:34.137570   32536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0331 18:04:34.364187   32536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 18:04:34.455055   32536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0331 18:04:34.561978   32536 api_server.go:51] waiting for apiserver process to appear ...
	I0331 18:04:34.562036   32536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 18:04:34.580421   32536 api_server.go:71] duration metric: took 18.440625ms to wait for apiserver process to appear ...
	I0331 18:04:34.580451   32536 api_server.go:87] waiting for apiserver healthz status ...
	I0331 18:04:34.580460   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:39.272172   32536 api_server.go:278] https://192.168.39.142:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0331 18:04:39.272207   32536 api_server.go:102] status: https://192.168.39.142:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0331 18:04:39.772935   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:39.777965   32536 api_server.go:278] https://192.168.39.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0331 18:04:39.777990   32536 api_server.go:102] status: https://192.168.39.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0331 18:04:40.272400   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:40.278072   32536 api_server.go:278] https://192.168.39.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0331 18:04:40.278095   32536 api_server.go:102] status: https://192.168.39.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0331 18:04:40.772913   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:40.779114   32536 api_server.go:278] https://192.168.39.142:8443/healthz returned 200:
	ok
	I0331 18:04:40.795833   32536 api_server.go:140] control plane version: v1.26.3
	I0331 18:04:40.795865   32536 api_server.go:130] duration metric: took 6.215408419s to wait for apiserver health ...
	I0331 18:04:40.795876   32536 cni.go:84] Creating CNI manager for ""
	I0331 18:04:40.795891   32536 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 18:04:40.797284   32536 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0331 18:04:40.798815   32536 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0331 18:04:40.826544   32536 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0331 18:04:40.864890   32536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0331 18:04:40.873818   32536 system_pods.go:59] 6 kube-system pods found
	I0331 18:04:40.873850   32536 system_pods.go:61] "coredns-787d4945fb-hcrtc" [1e78e1f9-1a39-4c02-a4e9-51e5b268d077] Running
	I0331 18:04:40.873858   32536 system_pods.go:61] "etcd-pause-939189" [cdc68c44-f3a4-4655-9818-48f074e8e376] Running
	I0331 18:04:40.873864   32536 system_pods.go:61] "kube-apiserver-pause-939189" [c40b018d-97b2-4cdf-9edc-e1473d304c55] Running
	I0331 18:04:40.873869   32536 system_pods.go:61] "kube-controller-manager-pause-939189" [69a62fcf-5db8-4354-aa08-ee5d2209a0ed] Running
	I0331 18:04:40.873875   32536 system_pods.go:61] "kube-proxy-jg8p6" [dd3378f4-948b-4bec-abd3-ea9dc35d3259] Running
	I0331 18:04:40.873881   32536 system_pods.go:61] "kube-scheduler-pause-939189" [b51eb2f5-8508-46f2-8c02-652ad1a69a1e] Running
	I0331 18:04:40.873889   32536 system_pods.go:74] duration metric: took 8.977073ms to wait for pod list to return data ...
	I0331 18:04:40.873899   32536 node_conditions.go:102] verifying NodePressure condition ...
	I0331 18:04:40.878737   32536 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0331 18:04:40.878762   32536 node_conditions.go:123] node cpu capacity is 2
	I0331 18:04:40.878773   32536 node_conditions.go:105] duration metric: took 4.86834ms to run NodePressure ...
	I0331 18:04:40.878791   32536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 18:04:41.336529   32536 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0331 18:04:41.345471   32536 kubeadm.go:784] kubelet initialised
	I0331 18:04:41.345500   32536 kubeadm.go:785] duration metric: took 8.940253ms waiting for restarted kubelet to initialise ...
	I0331 18:04:41.345509   32536 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0331 18:04:41.351874   32536 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:43.384606   32536 pod_ready.go:102] pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace has status "Ready":"False"
	I0331 18:04:45.387401   32536 pod_ready.go:102] pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace has status "Ready":"False"
	I0331 18:04:46.401715   32536 pod_ready.go:92] pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:46.401752   32536 pod_ready.go:81] duration metric: took 5.049857013s waiting for pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:46.401766   32536 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:48.421495   32536 pod_ready.go:102] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"False"
	I0331 18:04:50.765070   32536 pod_ready.go:102] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"False"
	I0331 18:04:52.921656   32536 pod_ready.go:102] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"False"
	I0331 18:04:53.421399   32536 pod_ready.go:92] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:53.421429   32536 pod_ready.go:81] duration metric: took 7.01965493s waiting for pod "etcd-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.421441   32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.429675   32536 pod_ready.go:92] pod "kube-apiserver-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:53.429697   32536 pod_ready.go:81] duration metric: took 8.249323ms waiting for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.429708   32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.438704   32536 pod_ready.go:92] pod "kube-controller-manager-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:53.438720   32536 pod_ready.go:81] duration metric: took 9.003572ms waiting for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.438731   32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.446519   32536 pod_ready.go:92] pod "kube-proxy-jg8p6" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:53.446534   32536 pod_ready.go:81] duration metric: took 7.795873ms waiting for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.446545   32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.451227   32536 pod_ready.go:92] pod "kube-scheduler-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:53.451242   32536 pod_ready.go:81] duration metric: took 4.691126ms waiting for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.451250   32536 pod_ready.go:38] duration metric: took 12.105730649s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0331 18:04:53.451272   32536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0331 18:04:53.463906   32536 ops.go:34] apiserver oom_adj: -16
	I0331 18:04:53.463925   32536 kubeadm.go:637] restartCluster took 55.388480099s
	I0331 18:04:53.463933   32536 kubeadm.go:403] StartCluster complete in 55.545742823s
	I0331 18:04:53.463952   32536 settings.go:142] acquiring lock: {Name:mk54cf97b6d1b5b12dec7aad9dd26d754e62bcd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 18:04:53.464032   32536 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16144-3494/kubeconfig
	I0331 18:04:53.464825   32536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/kubeconfig: {Name:mk0e63c10dbce63578041d9db05c951415a42011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 18:04:53.465096   32536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0331 18:04:53.465243   32536 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0331 18:04:53.465315   32536 config.go:182] Loaded profile config "pause-939189": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 18:04:53.465367   32536 cache.go:107] acquiring lock: {Name:mka2cf660dd4d542e74644eb9f55d9546287db85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 18:04:53.465432   32536 cache.go:115] /home/jenkins/minikube-integration/16144-3494/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0331 18:04:53.468377   32536 out.go:177] * Enabled addons: 
	I0331 18:04:53.465440   32536 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/16144-3494/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 77.875µs
	I0331 18:04:53.465689   32536 kapi.go:59] client config for pause-939189: &rest.Config{Host:"https://192.168.39.142:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/client.crt", KeyFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/client.key", CAFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x192bee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0331 18:04:53.469869   32536 addons.go:499] enable addons completed in 4.62348ms: enabled=[]
	I0331 18:04:53.469887   32536 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/16144-3494/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0331 18:04:53.469904   32536 cache.go:87] Successfully saved all images to host disk.
	I0331 18:04:53.470079   32536 config.go:182] Loaded profile config "pause-939189": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 18:04:53.470390   32536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 18:04:53.470414   32536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 18:04:53.472779   32536 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-939189" context rescaled to 1 replicas
	I0331 18:04:53.472816   32536 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0331 18:04:53.474464   32536 out.go:177] * Verifying Kubernetes components...
	I0331 18:04:53.475854   32536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 18:04:53.487310   32536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41011
	I0331 18:04:53.487911   32536 main.go:141] libmachine: () Calling .GetVersion
	I0331 18:04:53.488552   32536 main.go:141] libmachine: Using API Version  1
	I0331 18:04:53.488581   32536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 18:04:53.488899   32536 main.go:141] libmachine: () Calling .GetMachineName
	I0331 18:04:53.489075   32536 main.go:141] libmachine: (pause-939189) Calling .GetState
	I0331 18:04:53.491520   32536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 18:04:53.491556   32536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 18:04:53.508789   32536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0331 18:04:53.509289   32536 main.go:141] libmachine: () Calling .GetVersion
	I0331 18:04:53.509835   32536 main.go:141] libmachine: Using API Version  1
	I0331 18:04:53.509862   32536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 18:04:53.510320   32536 main.go:141] libmachine: () Calling .GetMachineName
	I0331 18:04:53.510605   32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
	I0331 18:04:53.510836   32536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 18:04:53.510866   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
	I0331 18:04:53.514674   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:04:53.515275   32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
	I0331 18:04:53.515296   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:04:53.515586   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
	I0331 18:04:53.515793   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
	I0331 18:04:53.515965   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
	I0331 18:04:53.516121   32536 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/machines/pause-939189/id_rsa Username:docker}
	I0331 18:04:53.632891   32536 node_ready.go:35] waiting up to 6m0s for node "pause-939189" to be "Ready" ...
	I0331 18:04:53.633113   32536 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0331 18:04:53.637258   32536 node_ready.go:49] node "pause-939189" has status "Ready":"True"
	I0331 18:04:53.637275   32536 node_ready.go:38] duration metric: took 4.35255ms waiting for node "pause-939189" to be "Ready" ...
	I0331 18:04:53.637285   32536 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0331 18:04:53.668203   32536 docker.go:639] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0331 18:04:53.668226   32536 cache_images.go:84] Images are preloaded, skipping loading
	I0331 18:04:53.668235   32536 cache_images.go:262] succeeded pushing to: pause-939189
	I0331 18:04:53.668239   32536 cache_images.go:263] failed pushing to: 
	I0331 18:04:53.668267   32536 main.go:141] libmachine: Making call to close driver server
	I0331 18:04:53.668284   32536 main.go:141] libmachine: (pause-939189) Calling .Close
	I0331 18:04:53.668596   32536 main.go:141] libmachine: Successfully made call to close driver server
	I0331 18:04:53.668613   32536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0331 18:04:53.668625   32536 main.go:141] libmachine: (pause-939189) DBG | Closing plugin on server side
	I0331 18:04:53.668625   32536 main.go:141] libmachine: Making call to close driver server
	I0331 18:04:53.668641   32536 main.go:141] libmachine: (pause-939189) Calling .Close
	I0331 18:04:53.668916   32536 main.go:141] libmachine: (pause-939189) DBG | Closing plugin on server side
	I0331 18:04:53.668922   32536 main.go:141] libmachine: Successfully made call to close driver server
	I0331 18:04:53.668942   32536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0331 18:04:53.821124   32536 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:54.218332   32536 pod_ready.go:92] pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:54.218358   32536 pod_ready.go:81] duration metric: took 397.210316ms waiting for pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:54.218367   32536 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:54.618607   32536 pod_ready.go:92] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:54.618631   32536 pod_ready.go:81] duration metric: took 400.255347ms waiting for pod "etcd-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:54.618640   32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:55.019356   32536 pod_ready.go:92] pod "kube-apiserver-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:55.019378   32536 pod_ready.go:81] duration metric: took 400.731414ms waiting for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:55.019393   32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:55.420085   32536 pod_ready.go:92] pod "kube-controller-manager-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:55.420114   32536 pod_ready.go:81] duration metric: took 400.711919ms waiting for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:55.420130   32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:55.819685   32536 pod_ready.go:92] pod "kube-proxy-jg8p6" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:55.819705   32536 pod_ready.go:81] duration metric: took 399.567435ms waiting for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:55.819719   32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:56.219488   32536 pod_ready.go:92] pod "kube-scheduler-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:56.219513   32536 pod_ready.go:81] duration metric: took 399.783789ms waiting for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:56.219524   32536 pod_ready.go:38] duration metric: took 2.582225755s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0331 18:04:56.219550   32536 api_server.go:51] waiting for apiserver process to appear ...
	I0331 18:04:56.219595   32536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 18:04:56.240919   32536 api_server.go:71] duration metric: took 2.768070005s to wait for apiserver process to appear ...
	I0331 18:04:56.240947   32536 api_server.go:87] waiting for apiserver healthz status ...
	I0331 18:04:56.240961   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:56.247401   32536 api_server.go:278] https://192.168.39.142:8443/healthz returned 200:
	ok
	I0331 18:04:56.248689   32536 api_server.go:140] control plane version: v1.26.3
	I0331 18:04:56.248709   32536 api_server.go:130] duration metric: took 7.754551ms to wait for apiserver health ...
	I0331 18:04:56.248718   32536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0331 18:04:56.422125   32536 system_pods.go:59] 6 kube-system pods found
	I0331 18:04:56.422151   32536 system_pods.go:61] "coredns-787d4945fb-hcrtc" [1e78e1f9-1a39-4c02-a4e9-51e5b268d077] Running
	I0331 18:04:56.422159   32536 system_pods.go:61] "etcd-pause-939189" [cdc68c44-f3a4-4655-9818-48f074e8e376] Running
	I0331 18:04:56.422166   32536 system_pods.go:61] "kube-apiserver-pause-939189" [c40b018d-97b2-4cdf-9edc-e1473d304c55] Running
	I0331 18:04:56.422174   32536 system_pods.go:61] "kube-controller-manager-pause-939189" [69a62fcf-5db8-4354-aa08-ee5d2209a0ed] Running
	I0331 18:04:56.422181   32536 system_pods.go:61] "kube-proxy-jg8p6" [dd3378f4-948b-4bec-abd3-ea9dc35d3259] Running
	I0331 18:04:56.422187   32536 system_pods.go:61] "kube-scheduler-pause-939189" [b51eb2f5-8508-46f2-8c02-652ad1a69a1e] Running
	I0331 18:04:56.422193   32536 system_pods.go:74] duration metric: took 173.469145ms to wait for pod list to return data ...
	I0331 18:04:56.422202   32536 default_sa.go:34] waiting for default service account to be created ...
	I0331 18:04:56.618165   32536 default_sa.go:45] found service account: "default"
	I0331 18:04:56.618190   32536 default_sa.go:55] duration metric: took 195.978567ms for default service account to be created ...
	I0331 18:04:56.618200   32536 system_pods.go:116] waiting for k8s-apps to be running ...
	I0331 18:04:56.823045   32536 system_pods.go:86] 6 kube-system pods found
	I0331 18:04:56.823082   32536 system_pods.go:89] "coredns-787d4945fb-hcrtc" [1e78e1f9-1a39-4c02-a4e9-51e5b268d077] Running
	I0331 18:04:56.823092   32536 system_pods.go:89] "etcd-pause-939189" [cdc68c44-f3a4-4655-9818-48f074e8e376] Running
	I0331 18:04:56.823099   32536 system_pods.go:89] "kube-apiserver-pause-939189" [c40b018d-97b2-4cdf-9edc-e1473d304c55] Running
	I0331 18:04:56.823107   32536 system_pods.go:89] "kube-controller-manager-pause-939189" [69a62fcf-5db8-4354-aa08-ee5d2209a0ed] Running
	I0331 18:04:56.823113   32536 system_pods.go:89] "kube-proxy-jg8p6" [dd3378f4-948b-4bec-abd3-ea9dc35d3259] Running
	I0331 18:04:56.823120   32536 system_pods.go:89] "kube-scheduler-pause-939189" [b51eb2f5-8508-46f2-8c02-652ad1a69a1e] Running
	I0331 18:04:56.823129   32536 system_pods.go:126] duration metric: took 204.923041ms to wait for k8s-apps to be running ...
	I0331 18:04:56.823144   32536 system_svc.go:44] waiting for kubelet service to be running ....
	I0331 18:04:56.823194   32536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 18:04:56.843108   32536 system_svc.go:56] duration metric: took 19.952106ms WaitForService to wait for kubelet.
	I0331 18:04:56.843157   32536 kubeadm.go:578] duration metric: took 3.370313636s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0331 18:04:56.843181   32536 node_conditions.go:102] verifying NodePressure condition ...
	I0331 18:04:57.019150   32536 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0331 18:04:57.019178   32536 node_conditions.go:123] node cpu capacity is 2
	I0331 18:04:57.019188   32536 node_conditions.go:105] duration metric: took 176.00176ms to run NodePressure ...
	I0331 18:04:57.019201   32536 start.go:228] waiting for startup goroutines ...
	I0331 18:04:57.019209   32536 start.go:233] waiting for cluster config update ...
	I0331 18:04:57.019219   32536 start.go:242] writing updated cluster config ...
	I0331 18:04:57.019587   32536 ssh_runner.go:195] Run: rm -f paused
	I0331 18:04:57.094738   32536 start.go:557] kubectl: 1.26.3, cluster: 1.26.3 (minor skew: 0)
	I0331 18:04:57.097707   32536 out.go:177] * Done! kubectl is now configured to use "pause-939189" cluster and "default" namespace by default

** /stderr **
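
For context, the assertion that failed at pause_test.go:100 reduces to a substring check on the captured start output: the second start logged "Updating the running kvm2 ... VM" instead of the reconfiguration-skipped message, so the check fails. A reduced, self-contained sketch of that pattern (variable names are hypothetical; see pause_test.go for the real test):

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// stdout stands in for the captured output of the second "minikube start".
    	stdout := `* Updating the running kvm2 "pause-939189" VM ...`
    	const want = "The running cluster does not require reconfiguration"
    	if !strings.Contains(stdout, want) {
    		fmt.Printf("FAIL: expected the second start log output to include %q\n", want)
    	}
    }
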
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-939189 -n pause-939189
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-939189 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-939189 logs -n 25: (1.243080205s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                      Args                      |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p stopped-upgrade-202435                      | stopped-upgrade-202435    | jenkins | v1.29.0 | 31 Mar 23 18:00 UTC | 31 Mar 23 18:02 UTC |
	|         | --memory=2200                                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-075589                   | kubernetes-upgrade-075589 | jenkins | v1.29.0 | 31 Mar 23 18:01 UTC |                     |
	|         | --memory=2200                                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-075589                   | kubernetes-upgrade-075589 | jenkins | v1.29.0 | 31 Mar 23 18:01 UTC | 31 Mar 23 18:02 UTC |
	|         | --memory=2200                                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.0-rc.0              |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-549601                      | cert-expiration-549601    | jenkins | v1.29.0 | 31 Mar 23 18:01 UTC | 31 Mar 23 18:01 UTC |
	| start   | -p pause-939189 --memory=2048                  | pause-939189              | jenkins | v1.29.0 | 31 Mar 23 18:01 UTC | 31 Mar 23 18:03 UTC |
	|         | --install-addons=false                         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                       |                           |         |         |                     |                     |
	| cache   | gvisor-836132 cache add                        | gvisor-836132             | jenkins | v1.29.0 | 31 Mar 23 18:01 UTC | 31 Mar 23 18:02 UTC |
	|         | gcr.io/k8s-minikube/gvisor-addon:2             |                           |         |         |                     |                     |
	| addons  | gvisor-836132 addons enable                    | gvisor-836132             | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:02 UTC |
	|         | gvisor                                         |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-202435                      | stopped-upgrade-202435    | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:02 UTC |
	| start   | -p force-systemd-env-066234                    | force-systemd-env-066234  | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:03 UTC |
	|         | --memory=2048                                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-075589                   | kubernetes-upgrade-075589 | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:02 UTC |
	| start   | -p cert-options-885841                         | cert-options-885841       | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:04 UTC |
	|         | --memory=2048                                  |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                  |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                    |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com               |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| stop    | -p gvisor-836132                               | gvisor-836132             | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:04 UTC |
	| start   | -p pause-939189                                | pause-939189              | jenkins | v1.29.0 | 31 Mar 23 18:03 UTC | 31 Mar 23 18:04 UTC |
	|         | --alsologtostderr -v=1                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-066234                       | force-systemd-env-066234  | jenkins | v1.29.0 | 31 Mar 23 18:03 UTC | 31 Mar 23 18:03 UTC |
	|         | ssh docker info --format                       |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-066234                    | force-systemd-env-066234  | jenkins | v1.29.0 | 31 Mar 23 18:03 UTC | 31 Mar 23 18:03 UTC |
	| start   | -p NoKubernetes-746317                         | NoKubernetes-746317       | jenkins | v1.29.0 | 31 Mar 23 18:03 UTC |                     |
	|         | --no-kubernetes                                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20                      |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-746317                         | NoKubernetes-746317       | jenkins | v1.29.0 | 31 Mar 23 18:03 UTC |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| ssh     | cert-options-885841 ssh                        | cert-options-885841       | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | 31 Mar 23 18:04 UTC |
	|         | openssl x509 -text -noout -in                  |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt          |                           |         |         |                     |                     |
	| ssh     | -p cert-options-885841 -- sudo                 | cert-options-885841       | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | 31 Mar 23 18:04 UTC |
	|         | cat /etc/kubernetes/admin.conf                 |                           |         |         |                     |                     |
	| delete  | -p cert-options-885841                         | cert-options-885841       | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | 31 Mar 23 18:04 UTC |
	| start   | -p auto-347180 --memory=3072                   | auto-347180               | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC |                     |
	|         | --alsologtostderr --wait=true                  |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                             |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-746317                         | NoKubernetes-746317       | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | 31 Mar 23 18:04 UTC |
	|         | --no-kubernetes --driver=kvm2                  |                           |         |         |                     |                     |
	| start   | -p gvisor-836132 --memory=2200                 | gvisor-836132             | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC |                     |
	|         | --container-runtime=containerd --docker-opt    |                           |         |         |                     |                     |
	|         | containerd=/var/run/containerd/containerd.sock |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-746317                         | NoKubernetes-746317       | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | 31 Mar 23 18:04 UTC |
	| start   | -p NoKubernetes-746317                         | NoKubernetes-746317       | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC |                     |
	|         | --no-kubernetes --driver=kvm2                  |                           |         |         |                     |                     |
	|---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/31 18:04:52
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0331 18:04:52.112989   33820 out.go:296] Setting OutFile to fd 1 ...
	I0331 18:04:52.113170   33820 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 18:04:52.113174   33820 out.go:309] Setting ErrFile to fd 2...
	I0331 18:04:52.113180   33820 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 18:04:52.113343   33820 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16144-3494/.minikube/bin
	I0331 18:04:52.114025   33820 out.go:303] Setting JSON to false
	I0331 18:04:52.115095   33820 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2843,"bootTime":1680283049,"procs":255,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1031-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0331 18:04:52.115161   33820 start.go:135] virtualization: kvm guest
	I0331 18:04:52.202763   33820 out.go:177] * [NoKubernetes-746317] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0331 18:04:52.295981   33820 out.go:177]   - MINIKUBE_LOCATION=16144
	I0331 18:04:52.295891   33820 notify.go:220] Checking for updates...
	I0331 18:04:52.419505   33820 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 18:04:52.544450   33820 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16144-3494/kubeconfig
	I0331 18:04:52.604388   33820 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16144-3494/.minikube
	I0331 18:04:52.606360   33820 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0331 18:04:52.608233   33820 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0331 18:04:52.610384   33820 config.go:182] Loaded profile config "auto-347180": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 18:04:52.610538   33820 config.go:182] Loaded profile config "gvisor-836132": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.3
	I0331 18:04:52.610724   33820 config.go:182] Loaded profile config "pause-939189": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 18:04:52.610745   33820 start.go:1732] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0331 18:04:52.610778   33820 driver.go:365] Setting default libvirt URI to qemu:///system
	I0331 18:04:52.649175   33820 out.go:177] * Using the kvm2 driver based on user configuration
	I0331 18:04:52.650741   33820 start.go:295] selected driver: kvm2
	I0331 18:04:52.650750   33820 start.go:859] validating driver "kvm2" against <nil>
	I0331 18:04:52.650762   33820 start.go:870] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0331 18:04:52.651120   33820 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 18:04:52.651207   33820 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16144-3494/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0331 18:04:52.665942   33820 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0331 18:04:52.665977   33820 start.go:1732] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0331 18:04:52.665987   33820 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0331 18:04:52.666616   33820 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
	I0331 18:04:52.666788   33820 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0331 18:04:52.666808   33820 cni.go:84] Creating CNI manager for ""
	I0331 18:04:52.666818   33820 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 18:04:52.666825   33820 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0331 18:04:52.666832   33820 start_flags.go:319] config:
	{Name:NoKubernetes-746317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:NoKubernetes-746317 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 18:04:52.666906   33820 start.go:1732] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0331 18:04:52.666977   33820 iso.go:125] acquiring lock: {Name:mk48583bcdf05c8e72651ed56790356a32c028b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 18:04:52.669123   33820 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-746317
	I0331 18:04:48.155281   33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
	I0331 18:04:48.155871   33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
	I0331 18:04:48.155896   33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:48.155813   33603 retry.go:31] will retry after 283.128145ms: waiting for machine to come up
	I0331 18:04:48.440401   33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
	I0331 18:04:48.440902   33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
	I0331 18:04:48.440924   33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:48.440860   33603 retry.go:31] will retry after 410.682274ms: waiting for machine to come up
	I0331 18:04:48.853565   33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
	I0331 18:04:48.854037   33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
	I0331 18:04:48.854052   33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:48.854000   33603 retry.go:31] will retry after 497.486632ms: waiting for machine to come up
	I0331 18:04:49.353711   33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
	I0331 18:04:49.354221   33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
	I0331 18:04:49.354243   33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:49.354178   33603 retry.go:31] will retry after 611.052328ms: waiting for machine to come up
	I0331 18:04:49.967240   33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
	I0331 18:04:50.040539   33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
	I0331 18:04:50.040577   33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:50.040409   33603 retry.go:31] will retry after 763.986572ms: waiting for machine to come up
	I0331 18:04:50.876927   33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
	I0331 18:04:50.877366   33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
	I0331 18:04:50.877457   33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:50.877308   33603 retry.go:31] will retry after 955.134484ms: waiting for machine to come up
	I0331 18:04:51.834716   33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
	I0331 18:04:51.835256   33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
	I0331 18:04:51.835316   33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:51.835243   33603 retry.go:31] will retry after 1.216587491s: waiting for machine to come up
	I0331 18:04:53.053498   33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
	I0331 18:04:53.054031   33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
	I0331 18:04:53.054059   33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:53.053989   33603 retry.go:31] will retry after 1.334972483s: waiting for machine to come up
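
The retry.go:31 delays above (283ms, 410ms, 497ms, 611ms, 763ms, 955ms, 1.216s, 1.334s) grow by roughly 1.3x per attempt with jitter while waiting for the VM to obtain an IP. A self-contained sketch of that backoff shape, not minikube's actual retry package:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry calls fn until it succeeds or attempts run out, growing the delay
    // ~1.3x per attempt with jitter, similar to the delays in the log above.
    func retry(attempts int, initial time.Duration, fn func() error) error {
    	delay := initial
    	for i := 0; i < attempts; i++ {
    		if err := fn(); err == nil {
    			return nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 4))
    		time.Sleep(delay + jitter)
    		delay = delay * 13 / 10
    	}
    	return errors.New("retry budget exhausted")
    }

    func main() {
    	err := retry(10, 300*time.Millisecond, func() error {
    		return errors.New("unable to find current IP address") // stand-in for the DHCP lookup
    	})
    	fmt.Println(err)
    }
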
	I0331 18:04:50.765070   32536 pod_ready.go:102] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"False"
	I0331 18:04:52.921656   32536 pod_ready.go:102] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"False"
	I0331 18:04:53.421399   32536 pod_ready.go:92] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:53.421429   32536 pod_ready.go:81] duration metric: took 7.01965493s waiting for pod "etcd-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.421441   32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.429675   32536 pod_ready.go:92] pod "kube-apiserver-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:53.429697   32536 pod_ready.go:81] duration metric: took 8.249323ms waiting for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.429708   32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.438704   32536 pod_ready.go:92] pod "kube-controller-manager-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:53.438720   32536 pod_ready.go:81] duration metric: took 9.003572ms waiting for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.438731   32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.446519   32536 pod_ready.go:92] pod "kube-proxy-jg8p6" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:53.446534   32536 pod_ready.go:81] duration metric: took 7.795873ms waiting for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.446545   32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.451227   32536 pod_ready.go:92] pod "kube-scheduler-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:53.451242   32536 pod_ready.go:81] duration metric: took 4.691126ms waiting for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.451250   32536 pod_ready.go:38] duration metric: took 12.105730649s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0331 18:04:53.451272   32536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0331 18:04:53.463906   32536 ops.go:34] apiserver oom_adj: -16
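
The ops.go:34 probe above reads /proc/<pid>/oom_adj for the apiserver; -16 tells the kernel to strongly avoid OOM-killing that process. A small sketch of the same check from Go, assuming pgrep is on PATH:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Println("kube-apiserver not running:", err)
    		return
    	}
    	pid := strings.Fields(string(out))[0] // first matching PID
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }
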
	I0331 18:04:53.463925   32536 kubeadm.go:637] restartCluster took 55.388480099s
	I0331 18:04:53.463933   32536 kubeadm.go:403] StartCluster complete in 55.545742823s
	I0331 18:04:53.463952   32536 settings.go:142] acquiring lock: {Name:mk54cf97b6d1b5b12dec7aad9dd26d754e62bcd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 18:04:53.464032   32536 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16144-3494/kubeconfig
	I0331 18:04:53.464825   32536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/kubeconfig: {Name:mk0e63c10dbce63578041d9db05c951415a42011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 18:04:53.465096   32536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0331 18:04:53.465243   32536 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0331 18:04:53.465315   32536 config.go:182] Loaded profile config "pause-939189": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 18:04:53.465367   32536 cache.go:107] acquiring lock: {Name:mka2cf660dd4d542e74644eb9f55d9546287db85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 18:04:53.465432   32536 cache.go:115] /home/jenkins/minikube-integration/16144-3494/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0331 18:04:53.468377   32536 out.go:177] * Enabled addons: 
	I0331 18:04:53.465440   32536 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/16144-3494/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 77.875µs
	I0331 18:04:53.465689   32536 kapi.go:59] client config for pause-939189: &rest.Config{Host:"https://192.168.39.142:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/client.crt", KeyFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/client.key", CAFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x192bee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0331 18:04:53.469869   32536 addons.go:499] enable addons completed in 4.62348ms: enabled=[]
	I0331 18:04:53.469887   32536 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/16144-3494/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0331 18:04:53.469904   32536 cache.go:87] Successfully saved all images to host disk.
	I0331 18:04:53.470079   32536 config.go:182] Loaded profile config "pause-939189": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 18:04:53.470390   32536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 18:04:53.470414   32536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 18:04:53.472779   32536 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-939189" context rescaled to 1 replicas
	I0331 18:04:53.472816   32536 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0331 18:04:53.474464   32536 out.go:177] * Verifying Kubernetes components...
	I0331 18:04:49.689822   33276 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.064390662s)
	I0331 18:04:49.689845   33276 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0331 18:04:49.730226   33276 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0331 18:04:49.740534   33276 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
	I0331 18:04:49.759896   33276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 18:04:49.892044   33276 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0331 18:04:52.833806   33276 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.941720773s)
	I0331 18:04:52.833863   33276 start.go:481] detecting cgroup driver to use...
	I0331 18:04:52.833984   33276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 18:04:52.856132   33276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0331 18:04:52.867005   33276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0331 18:04:52.875838   33276 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0331 18:04:52.875899   33276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0331 18:04:52.885209   33276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 18:04:52.895294   33276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0331 18:04:52.906080   33276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 18:04:52.916021   33276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0331 18:04:52.927401   33276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0331 18:04:52.936940   33276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0331 18:04:52.945127   33276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0331 18:04:52.953052   33276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 18:04:53.053440   33276 ssh_runner.go:195] Run: sudo systemctl restart containerd
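
The sed invocations above pin containerd to the cgroupfs driver by rewriting /etc/containerd/config.toml before the restart. The key substitution as a Go sketch over an in-memory sample (the file contents here are illustrative):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := []byte(`[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`)
    	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))
    	fmt.Println(string(out))
    }
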
	I0331 18:04:53.071425   33276 start.go:481] detecting cgroup driver to use...
	I0331 18:04:53.071501   33276 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0331 18:04:53.090019   33276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0331 18:04:53.104446   33276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0331 18:04:53.123957   33276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0331 18:04:53.139648   33276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0331 18:04:53.155612   33276 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0331 18:04:53.186101   33276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0331 18:04:53.202708   33276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 18:04:53.222722   33276 ssh_runner.go:195] Run: which cri-dockerd
	I0331 18:04:53.227094   33276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0331 18:04:53.236406   33276 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0331 18:04:53.252225   33276 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0331 18:04:53.363704   33276 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0331 18:04:53.479794   33276 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0331 18:04:53.479826   33276 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0331 18:04:53.502900   33276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 18:04:53.633618   33276 ssh_runner.go:195] Run: sudo systemctl restart docker
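
The 144-byte /etc/docker/daemon.json pushed above is what switches the docker daemon to cgroupfs. The log shows only the file's size, so the payload below is an assumed shape, not the literal file:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Assumed shape of the config; "exec-opts" with native.cgroupdriver is the
    	// documented way to select a cgroup driver for dockerd.
    	daemonJSON := `{
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file"
    }
    `
    	// Written to /tmp here; the real file lives at /etc/docker/daemon.json
    	// and needs a "systemctl restart docker" to take effect.
    	if err := os.WriteFile("/tmp/daemon.json", []byte(daemonJSON), 0644); err != nil {
    		fmt.Println(err)
    	}
    }
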
	I0331 18:04:53.475854   32536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 18:04:53.487310   32536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41011
	I0331 18:04:53.487911   32536 main.go:141] libmachine: () Calling .GetVersion
	I0331 18:04:53.488552   32536 main.go:141] libmachine: Using API Version  1
	I0331 18:04:53.488581   32536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 18:04:53.488899   32536 main.go:141] libmachine: () Calling .GetMachineName
	I0331 18:04:53.489075   32536 main.go:141] libmachine: (pause-939189) Calling .GetState
	I0331 18:04:53.491520   32536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 18:04:53.491556   32536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 18:04:53.508789   32536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0331 18:04:53.509289   32536 main.go:141] libmachine: () Calling .GetVersion
	I0331 18:04:53.509835   32536 main.go:141] libmachine: Using API Version  1
	I0331 18:04:53.509862   32536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 18:04:53.510320   32536 main.go:141] libmachine: () Calling .GetMachineName
	I0331 18:04:53.510605   32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
	I0331 18:04:53.510836   32536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 18:04:53.510866   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
	I0331 18:04:53.514674   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:04:53.515275   32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
	I0331 18:04:53.515296   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:04:53.515586   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
	I0331 18:04:53.515793   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
	I0331 18:04:53.515965   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
	I0331 18:04:53.516121   32536 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/machines/pause-939189/id_rsa Username:docker}
	I0331 18:04:53.632891   32536 node_ready.go:35] waiting up to 6m0s for node "pause-939189" to be "Ready" ...
	I0331 18:04:53.633113   32536 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0331 18:04:53.637258   32536 node_ready.go:49] node "pause-939189" has status "Ready":"True"
	I0331 18:04:53.637275   32536 node_ready.go:38] duration metric: took 4.35255ms waiting for node "pause-939189" to be "Ready" ...
	I0331 18:04:53.637285   32536 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0331 18:04:53.668203   32536 docker.go:639] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0331 18:04:53.668226   32536 cache_images.go:84] Images are preloaded, skipping loading
	I0331 18:04:53.668235   32536 cache_images.go:262] succeeded pushing to: pause-939189
	I0331 18:04:53.668239   32536 cache_images.go:263] failed pushing to: 
	I0331 18:04:53.668267   32536 main.go:141] libmachine: Making call to close driver server
	I0331 18:04:53.668284   32536 main.go:141] libmachine: (pause-939189) Calling .Close
	I0331 18:04:53.668596   32536 main.go:141] libmachine: Successfully made call to close driver server
	I0331 18:04:53.668613   32536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0331 18:04:53.668625   32536 main.go:141] libmachine: (pause-939189) DBG | Closing plugin on server side
	I0331 18:04:53.668625   32536 main.go:141] libmachine: Making call to close driver server
	I0331 18:04:53.668641   32536 main.go:141] libmachine: (pause-939189) Calling .Close
	I0331 18:04:53.668916   32536 main.go:141] libmachine: (pause-939189) DBG | Closing plugin on server side
	I0331 18:04:53.668922   32536 main.go:141] libmachine: Successfully made call to close driver server
	I0331 18:04:53.668942   32536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0331 18:04:53.821124   32536 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:54.218332   32536 pod_ready.go:92] pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:54.218358   32536 pod_ready.go:81] duration metric: took 397.210316ms waiting for pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:54.218367   32536 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:54.618607   32536 pod_ready.go:92] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:54.618631   32536 pod_ready.go:81] duration metric: took 400.255347ms waiting for pod "etcd-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:54.618640   32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:55.019356   32536 pod_ready.go:92] pod "kube-apiserver-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:55.019378   32536 pod_ready.go:81] duration metric: took 400.731414ms waiting for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:55.019393   32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:55.420085   32536 pod_ready.go:92] pod "kube-controller-manager-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:55.420114   32536 pod_ready.go:81] duration metric: took 400.711919ms waiting for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:55.420130   32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:55.015443   33276 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.381792307s)
	I0331 18:04:55.015525   33276 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0331 18:04:55.133415   33276 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0331 18:04:55.243506   33276 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0331 18:04:55.356452   33276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 18:04:55.477055   33276 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0331 18:04:55.493533   33276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 18:04:55.611643   33276 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0331 18:04:55.707141   33276 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0331 18:04:55.707200   33276 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0331 18:04:55.713403   33276 start.go:549] Will wait 60s for crictl version
	I0331 18:04:55.713474   33276 ssh_runner.go:195] Run: which crictl
	I0331 18:04:55.718338   33276 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0331 18:04:55.774128   33276 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0331 18:04:55.774203   33276 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0331 18:04:55.810277   33276 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0331 18:04:55.819685   32536 pod_ready.go:92] pod "kube-proxy-jg8p6" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:55.819705   32536 pod_ready.go:81] duration metric: took 399.567435ms waiting for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:55.819719   32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:56.219488   32536 pod_ready.go:92] pod "kube-scheduler-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:56.219513   32536 pod_ready.go:81] duration metric: took 399.783789ms waiting for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:56.219524   32536 pod_ready.go:38] duration metric: took 2.582225755s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0331 18:04:56.219550   32536 api_server.go:51] waiting for apiserver process to appear ...
	I0331 18:04:56.219595   32536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 18:04:56.240919   32536 api_server.go:71] duration metric: took 2.768070005s to wait for apiserver process to appear ...
	I0331 18:04:56.240947   32536 api_server.go:87] waiting for apiserver healthz status ...
	I0331 18:04:56.240961   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:56.247401   32536 api_server.go:278] https://192.168.39.142:8443/healthz returned 200:
	ok
	I0331 18:04:56.248689   32536 api_server.go:140] control plane version: v1.26.3
	I0331 18:04:56.248709   32536 api_server.go:130] duration metric: took 7.754551ms to wait for apiserver health ...
	I0331 18:04:56.248718   32536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0331 18:04:56.422125   32536 system_pods.go:59] 6 kube-system pods found
	I0331 18:04:56.422151   32536 system_pods.go:61] "coredns-787d4945fb-hcrtc" [1e78e1f9-1a39-4c02-a4e9-51e5b268d077] Running
	I0331 18:04:56.422159   32536 system_pods.go:61] "etcd-pause-939189" [cdc68c44-f3a4-4655-9818-48f074e8e376] Running
	I0331 18:04:56.422166   32536 system_pods.go:61] "kube-apiserver-pause-939189" [c40b018d-97b2-4cdf-9edc-e1473d304c55] Running
	I0331 18:04:56.422174   32536 system_pods.go:61] "kube-controller-manager-pause-939189" [69a62fcf-5db8-4354-aa08-ee5d2209a0ed] Running
	I0331 18:04:56.422181   32536 system_pods.go:61] "kube-proxy-jg8p6" [dd3378f4-948b-4bec-abd3-ea9dc35d3259] Running
	I0331 18:04:56.422187   32536 system_pods.go:61] "kube-scheduler-pause-939189" [b51eb2f5-8508-46f2-8c02-652ad1a69a1e] Running
	I0331 18:04:56.422193   32536 system_pods.go:74] duration metric: took 173.469145ms to wait for pod list to return data ...
	I0331 18:04:56.422202   32536 default_sa.go:34] waiting for default service account to be created ...
	I0331 18:04:56.618165   32536 default_sa.go:45] found service account: "default"
	I0331 18:04:56.618190   32536 default_sa.go:55] duration metric: took 195.978567ms for default service account to be created ...
	I0331 18:04:56.618200   32536 system_pods.go:116] waiting for k8s-apps to be running ...
	I0331 18:04:56.823045   32536 system_pods.go:86] 6 kube-system pods found
	I0331 18:04:56.823082   32536 system_pods.go:89] "coredns-787d4945fb-hcrtc" [1e78e1f9-1a39-4c02-a4e9-51e5b268d077] Running
	I0331 18:04:56.823092   32536 system_pods.go:89] "etcd-pause-939189" [cdc68c44-f3a4-4655-9818-48f074e8e376] Running
	I0331 18:04:56.823099   32536 system_pods.go:89] "kube-apiserver-pause-939189" [c40b018d-97b2-4cdf-9edc-e1473d304c55] Running
	I0331 18:04:56.823107   32536 system_pods.go:89] "kube-controller-manager-pause-939189" [69a62fcf-5db8-4354-aa08-ee5d2209a0ed] Running
	I0331 18:04:56.823113   32536 system_pods.go:89] "kube-proxy-jg8p6" [dd3378f4-948b-4bec-abd3-ea9dc35d3259] Running
	I0331 18:04:56.823120   32536 system_pods.go:89] "kube-scheduler-pause-939189" [b51eb2f5-8508-46f2-8c02-652ad1a69a1e] Running
	I0331 18:04:56.823129   32536 system_pods.go:126] duration metric: took 204.923041ms to wait for k8s-apps to be running ...
	I0331 18:04:56.823144   32536 system_svc.go:44] waiting for kubelet service to be running ....
	I0331 18:04:56.823194   32536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 18:04:56.843108   32536 system_svc.go:56] duration metric: took 19.952106ms WaitForService to wait for kubelet.
	I0331 18:04:56.843157   32536 kubeadm.go:578] duration metric: took 3.370313636s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0331 18:04:56.843181   32536 node_conditions.go:102] verifying NodePressure condition ...
	I0331 18:04:57.019150   32536 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0331 18:04:57.019178   32536 node_conditions.go:123] node cpu capacity is 2
	I0331 18:04:57.019188   32536 node_conditions.go:105] duration metric: took 176.00176ms to run NodePressure ...
	I0331 18:04:57.019201   32536 start.go:228] waiting for startup goroutines ...
	I0331 18:04:57.019209   32536 start.go:233] waiting for cluster config update ...
	I0331 18:04:57.019219   32536 start.go:242] writing updated cluster config ...
	I0331 18:04:57.019587   32536 ssh_runner.go:195] Run: rm -f paused
	I0331 18:04:57.094738   32536 start.go:557] kubectl: 1.26.3, cluster: 1.26.3 (minor skew: 0)
	I0331 18:04:57.097707   32536 out.go:177] * Done! kubectl is now configured to use "pause-939189" cluster and "default" namespace by default
	I0331 18:04:52.670594   33820 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
	W0331 18:04:52.706864   33820 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
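
This 404 is expected: with Kubernetes disabled the version is pinned to v0.0.0, no preload tarball exists for that version, and the start falls back to a non-preloaded boot. A sketch of the existence probe against the URL from the log:

    package main

    import (
    	"fmt"
    	"net/http"
    )

    func main() {
    	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/" +
    		"preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4"
    	resp, err := http.Head(url)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode == http.StatusNotFound {
    		fmt.Println("no preload for this version; continuing without preloaded images")
    		return
    	}
    	fmt.Println("preload available:", resp.Status)
    }
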
	I0331 18:04:52.707029   33820 profile.go:148] Saving config to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/NoKubernetes-746317/config.json ...
	I0331 18:04:52.707063   33820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/NoKubernetes-746317/config.json: {Name:mkc819cfb6c45ebbebd0d82f4a0be54fd6cd98e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 18:04:52.707228   33820 cache.go:193] Successfully downloaded all kic artifacts
	I0331 18:04:52.707251   33820 start.go:364] acquiring machines lock for NoKubernetes-746317: {Name:mkfdc5208de17d93700ea90324b4f36214eab469 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0331 18:04:55.847800   33276 out.go:204] * Preparing Kubernetes v1.26.3 on Docker 20.10.23 ...
	I0331 18:04:55.847864   33276 main.go:141] libmachine: (auto-347180) Calling .GetIP
	I0331 18:04:55.850787   33276 main.go:141] libmachine: (auto-347180) DBG | domain auto-347180 has defined MAC address 52:54:00:61:01:e7 in network mk-auto-347180
	I0331 18:04:55.851207   33276 main.go:141] libmachine: (auto-347180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:01:e7", ip: ""} in network mk-auto-347180: {Iface:virbr3 ExpiryTime:2023-03-31 19:04:35 +0000 UTC Type:0 Mac:52:54:00:61:01:e7 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:auto-347180 Clientid:01:52:54:00:61:01:e7}
	I0331 18:04:55.851239   33276 main.go:141] libmachine: (auto-347180) DBG | domain auto-347180 has defined IP address 192.168.72.199 and MAC address 52:54:00:61:01:e7 in network mk-auto-347180
	I0331 18:04:55.851415   33276 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0331 18:04:55.855857   33276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0331 18:04:55.868328   33276 localpath.go:92] copying /home/jenkins/minikube-integration/16144-3494/.minikube/client.crt -> /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt
	I0331 18:04:55.868487   33276 localpath.go:117] copying /home/jenkins/minikube-integration/16144-3494/.minikube/client.key -> /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.key
	I0331 18:04:55.868617   33276 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0331 18:04:55.868673   33276 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 18:04:55.896702   33276 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0331 18:04:55.896733   33276 docker.go:569] Images already preloaded, skipping extraction
	I0331 18:04:55.896797   33276 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 18:04:55.924955   33276 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0331 18:04:55.924992   33276 cache_images.go:84] Images are preloaded, skipping loading
	I0331 18:04:55.925053   33276 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0331 18:04:55.965144   33276 cni.go:84] Creating CNI manager for ""
	I0331 18:04:55.965172   33276 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 18:04:55.965185   33276 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0331 18:04:55.965205   33276 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.199 APIServerPort:8443 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-347180 NodeName:auto-347180 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0331 18:04:55.965393   33276 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "auto-347180"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.199
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.199"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
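
The rendered config above is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---. As an editor's sketch (not minikube code), such a stream can be sanity-checked before it is copied to the node using gopkg.in/yaml.v3, assuming kubeadmYAML holds the file contents and the usual io/log/fmt/strings imports:

	dec := yaml.NewDecoder(strings.NewReader(kubeadmYAML))
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break // all four documents parsed cleanly
		}
		if err != nil {
			log.Fatalf("malformed kubeadm config: %v", err)
		}
		fmt.Println(doc["kind"]) // InitConfiguration, ClusterConfiguration, ...
	}
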
	
	I0331 18:04:55.965514   33276 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=auto-347180 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.3 ClusterName:auto-347180 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0331 18:04:55.965613   33276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3
	I0331 18:04:55.975410   33276 binaries.go:44] Found k8s binaries, skipping transfer
	I0331 18:04:55.975480   33276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0331 18:04:55.984755   33276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0331 18:04:56.009787   33276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0331 18:04:56.031312   33276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0331 18:04:56.049714   33276 ssh_runner.go:195] Run: grep 192.168.72.199	control-plane.minikube.internal$ /etc/hosts
	I0331 18:04:56.054641   33276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0331 18:04:56.067876   33276 certs.go:56] Setting up /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180 for IP: 192.168.72.199
	I0331 18:04:56.067912   33276 certs.go:186] acquiring lock for shared ca certs: {Name:mk5b2b979756b4a682c5be81dc53f006bb9a7a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 18:04:56.068110   33276 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16144-3494/.minikube/ca.key
	I0331 18:04:56.068167   33276 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16144-3494/.minikube/proxy-client-ca.key
	I0331 18:04:56.068278   33276 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.key
	I0331 18:04:56.068308   33276 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key.217b3e23
	I0331 18:04:56.068325   33276 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt.217b3e23 with IP's: [192.168.72.199 10.96.0.1 127.0.0.1 10.0.0.1]
	I0331 18:04:56.209196   33276 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt.217b3e23 ...
	I0331 18:04:56.209224   33276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt.217b3e23: {Name:mk3e4cd47c6706ab2f578dfdd08d80ebdd3c15fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 18:04:56.209429   33276 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key.217b3e23 ...
	I0331 18:04:56.209445   33276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key.217b3e23: {Name:mk009817638857b2bbdb66530e778b671a0003f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 18:04:56.209547   33276 certs.go:333] copying /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt.217b3e23 -> /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt
	I0331 18:04:56.209609   33276 certs.go:337] copying /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key.217b3e23 -> /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key
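
The SAN list used for apiserver.crt above is deliberate: the VM's node IP, 10.96.0.1 (the first address of the 10.96.0.0/12 service CIDR, i.e. the in-cluster kubernetes service), loopback, and 10.0.0.1 as a legacy default. A hedged crypto/x509 sketch of such a serving-certificate template (illustrative values; the actual cert is then signed against minikubeCA via x509.CreateCertificate):

	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("192.168.72.199"), // node IP
			net.ParseIP("10.96.0.1"),      // kubernetes.default service VIP
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
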
	I0331 18:04:56.209656   33276 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.key
	I0331 18:04:56.209668   33276 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.crt with IP's: []
	I0331 18:04:56.257382   33276 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.crt ...
	I0331 18:04:56.257405   33276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.crt: {Name:mk082703dadea0ea3251f4202bbf72399caa3a43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 18:04:56.257583   33276 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.key ...
	I0331 18:04:56.257595   33276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.key: {Name:mk4b72bffb94c8b27e86fc5f7b2d38af391fe2ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 18:04:56.257819   33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/10540.pem (1338 bytes)
	W0331 18:04:56.257876   33276 certs.go:397] ignoring /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/10540_empty.pem, impossibly tiny 0 bytes
	I0331 18:04:56.257892   33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca-key.pem (1675 bytes)
	I0331 18:04:56.257924   33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca.pem (1078 bytes)
	I0331 18:04:56.257959   33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/cert.pem (1123 bytes)
	I0331 18:04:56.257987   33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/key.pem (1679 bytes)
	I0331 18:04:56.258026   33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/105402.pem (1708 bytes)
	I0331 18:04:56.258526   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0331 18:04:56.287806   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0331 18:04:56.314968   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0331 18:04:56.338082   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0331 18:04:56.360708   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0331 18:04:56.390138   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0331 18:04:56.419129   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0331 18:04:56.447101   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0331 18:04:56.472169   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/105402.pem --> /usr/share/ca-certificates/105402.pem (1708 bytes)
	I0331 18:04:56.498664   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0331 18:04:56.525516   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/certs/10540.pem --> /usr/share/ca-certificates/10540.pem (1338 bytes)
	I0331 18:04:56.548806   33276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0331 18:04:56.565642   33276 ssh_runner.go:195] Run: openssl version
	I0331 18:04:56.571067   33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105402.pem && ln -fs /usr/share/ca-certificates/105402.pem /etc/ssl/certs/105402.pem"
	I0331 18:04:56.580624   33276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105402.pem
	I0331 18:04:56.585385   33276 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 31 17:26 /usr/share/ca-certificates/105402.pem
	I0331 18:04:56.585449   33276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105402.pem
	I0331 18:04:56.591662   33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105402.pem /etc/ssl/certs/3ec20f2e.0"
	I0331 18:04:56.602558   33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0331 18:04:56.612933   33276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0331 18:04:56.619029   33276 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 31 17:21 /usr/share/ca-certificates/minikubeCA.pem
	I0331 18:04:56.619087   33276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0331 18:04:56.626198   33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0331 18:04:56.639266   33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10540.pem && ln -fs /usr/share/ca-certificates/10540.pem /etc/ssl/certs/10540.pem"
	I0331 18:04:56.649914   33276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10540.pem
	I0331 18:04:56.654454   33276 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 31 17:26 /usr/share/ca-certificates/10540.pem
	I0331 18:04:56.654515   33276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10540.pem
	I0331 18:04:56.661570   33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10540.pem /etc/ssl/certs/51391683.0"
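
The three openssl x509 -hash / ln -fs pairs above populate OpenSSL's hashed-directory lookup scheme: a trust-store entry is located through a symlink named <subject-hash>.0 (hence b5213941.0 for minikubeCA.pem). An equivalent Go sketch that shells out to openssl the same way (invented helper name; assumes os, os/exec, path/filepath, strings):

	// linkCACert installs pemPath into /etc/ssl/certs under its
	// OpenSSL subject-hash name so TLS clients can discover it.
	func linkCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		// os.Symlink fails if the link exists; IsExist keeps this idempotent.
		if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
			return err
		}
		return nil
	}
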
	I0331 18:04:56.671169   33276 kubeadm.go:401] StartCluster: {Name:auto-347180 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16191/minikube-v1.29.0-1680115329-16191-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:auto-347180 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 18:04:56.671303   33276 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0331 18:04:56.695923   33276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0331 18:04:56.705641   33276 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0331 18:04:56.715247   33276 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0331 18:04:56.724602   33276 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0331 18:04:56.724655   33276 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
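
Because this start runs inside an already-provisioned VM, the kubeadm init invocation above suppresses a fixed set of preflight checks (pre-existing manifest directories, the kubelet port, swap, and the CPU/memory minimums). Building that flag is a plain comma join; a tiny illustrative sketch with a truncated ignore list:

	ignores := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"Port-10250", "Swap", "NumCPU", "Mem",
	}
	cmd := exec.Command("kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors="+strings.Join(ignores, ","))
	// minikube actually wraps this in `sudo env PATH=... kubeadm ...` over SSH.
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("kubeadm init failed: %v\n%s", err, out)
	}
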
	I0331 18:04:56.783971   33276 kubeadm.go:322] [init] Using Kubernetes version: v1.26.3
	I0331 18:04:56.784098   33276 kubeadm.go:322] [preflight] Running pre-flight checks
	I0331 18:04:56.929895   33276 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0331 18:04:56.930047   33276 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0331 18:04:56.930171   33276 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0331 18:04:57.156879   33276 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	* 
	* ==> Docker <==
	* -- Journal begins at Fri 2023-03-31 18:02:19 UTC, ends at Fri 2023-03-31 18:04:57 UTC. --
	Mar 31 18:04:32 pause-939189 dockerd[4567]: time="2023-03-31T18:04:32.708286788Z" level=warning msg="cleaning up after shim disconnected" id=b400c024f135f7c82274f810b9ce06d15d41eb95e87b7caae02c5db9542e56db namespace=moby
	Mar 31 18:04:32 pause-939189 dockerd[4567]: time="2023-03-31T18:04:32.708340669Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 31 18:04:32 pause-939189 cri-dockerd[5345]: W0331 18:04:32.836659    5345 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.348379648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.348500345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.348521902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.348533652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.357176945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.357265075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.357291341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.357305204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 31 18:04:39 pause-939189 cri-dockerd[5345]: time="2023-03-31T18:04:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.947465780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.947526265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.947543565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.947555826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.953976070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.954296632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.954453909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.954623054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 31 18:04:41 pause-939189 cri-dockerd[5345]: time="2023-03-31T18:04:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/11bb612576207ce6f9fdbde8dfa7f6235a96c8d3be559f2e51d8d4b173aa4b51/resolv.conf as [nameserver 192.168.122.1]"
	Mar 31 18:04:41 pause-939189 dockerd[4567]: time="2023-03-31T18:04:41.977346347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 31 18:04:41 pause-939189 dockerd[4567]: time="2023-03-31T18:04:41.977635522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 31 18:04:41 pause-939189 dockerd[4567]: time="2023-03-31T18:04:41.977752683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 31 18:04:41 pause-939189 dockerd[4567]: time="2023-03-31T18:04:41.977778301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	1344b5c000a9d       5185b96f0becf       16 seconds ago       Running             coredns                   2                   11bb612576207
	1686d0df28f10       92ed2bec97a63       17 seconds ago       Running             kube-proxy                3                   18b52638ab7a1
	5d40b2ef4a864       5a79047369329       22 seconds ago       Running             kube-scheduler            3                   df301869b351d
	80b600760e999       fce326961ae2d       22 seconds ago       Running             etcd                      3                   1089f600d6711
	84de5d76d35ca       ce8c2293ef09c       26 seconds ago       Running             kube-controller-manager   2                   55c3c7ee9ca0a
	966b1cd3b351e       1d9b3cbae03ce       28 seconds ago       Running             kube-apiserver            2                   0afb944a4f151
	a0ad0a35a3e08       fce326961ae2d       43 seconds ago       Exited              etcd                      2                   c447bce0c8aef
	b4599f5bff86d       5a79047369329       43 seconds ago       Exited              kube-scheduler            2                   6981b4d73a6c9
	9999f58d27656       92ed2bec97a63       45 seconds ago       Exited              kube-proxy                2                   f5b35d44675c8
	b400c024f135f       5185b96f0becf       58 seconds ago       Exited              coredns                   1                   5e8b08d2a8f2f
	874fcc56f9f62       1d9b3cbae03ce       About a minute ago   Exited              kube-apiserver            1                   4045aa0f265a1
	8ace7d6c4bee4       ce8c2293ef09c       About a minute ago   Exited              kube-controller-manager   1                   b034146fe7e8c
	
	* 
	* ==> coredns [1344b5c000a9] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:58096 - 62967 "HINFO IN 3459962459257687508.4367275231804161359. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020935271s
	
	* 
	* ==> coredns [b400c024f135] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:42721 - 9088 "HINFO IN 8560628874867663181.8710474958470687856. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.051252273s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               pause-939189
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-939189
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=945b3fc45ee9ac8e1ceaffb00a71ec22c717b10e
	                    minikube.k8s.io/name=pause-939189
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_03_31T18_03_00_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Mar 2023 18:02:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-939189
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 Mar 2023 18:04:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Mar 2023 18:04:39 +0000   Fri, 31 Mar 2023 18:02:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Mar 2023 18:04:39 +0000   Fri, 31 Mar 2023 18:02:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Mar 2023 18:04:39 +0000   Fri, 31 Mar 2023 18:02:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 31 Mar 2023 18:04:39 +0000   Fri, 31 Mar 2023 18:03:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.142
	  Hostname:    pause-939189
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 ff362cba6608463787695edbccc756af
	  System UUID:                ff362cba-6608-4637-8769-5edbccc756af
	  Boot ID:                    8edfbfeb-24ea-46a9-b4c5-e31dc2d1b4c1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.3
	  Kube-Proxy Version:         v1.26.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-787d4945fb-hcrtc                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     106s
	  kube-system                 etcd-pause-939189                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         118s
	  kube-system                 kube-apiserver-pause-939189             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-pause-939189    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-jg8p6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-pause-939189             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
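
The bracketed figures are requests as a share of the node's allocatable capacity: 750m CPU on a 2-core (2000m) node truncates to 37%, and 170Mi against 2017420Ki of memory comes out at 8%. For instance:

	fmt.Printf("cpu: %d%%\n", 750*100/2000)         // 37
	fmt.Printf("mem: %d%%\n", 170*1024*100/2017420) // 8
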
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m8s (x4 over 2m8s)  kubelet          Node pause-939189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s (x4 over 2m8s)  kubelet          Node pause-939189 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s (x4 over 2m8s)  kubelet          Node pause-939189 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node pause-939189 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node pause-939189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node pause-939189 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  118s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                118s                 kubelet          Node pause-939189 status is now: NodeReady
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node pause-939189 event: Registered Node pause-939189 in Controller
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)    kubelet          Node pause-939189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)    kubelet          Node pause-939189 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)    kubelet          Node pause-939189 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                   node-controller  Node pause-939189 event: Registered Node pause-939189 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.422579] systemd-fstab-generator[930]: Ignoring "noauto" for root device
	[  +0.164482] systemd-fstab-generator[941]: Ignoring "noauto" for root device
	[  +0.161981] systemd-fstab-generator[954]: Ignoring "noauto" for root device
	[  +1.600832] systemd-fstab-generator[1102]: Ignoring "noauto" for root device
	[  +0.111337] systemd-fstab-generator[1113]: Ignoring "noauto" for root device
	[  +0.130984] systemd-fstab-generator[1124]: Ignoring "noauto" for root device
	[  +0.124503] systemd-fstab-generator[1135]: Ignoring "noauto" for root device
	[  +0.132321] systemd-fstab-generator[1149]: Ignoring "noauto" for root device
	[  +4.351511] systemd-fstab-generator[1397]: Ignoring "noauto" for root device
	[  +0.702241] kauditd_printk_skb: 68 callbacks suppressed
	[  +9.105596] systemd-fstab-generator[2340]: Ignoring "noauto" for root device
	[Mar31 18:03] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.099775] kauditd_printk_skb: 28 callbacks suppressed
	[ +22.013414] systemd-fstab-generator[3826]: Ignoring "noauto" for root device
	[  +0.416829] systemd-fstab-generator[3860]: Ignoring "noauto" for root device
	[  +0.213956] systemd-fstab-generator[3871]: Ignoring "noauto" for root device
	[  +0.230022] systemd-fstab-generator[3884]: Ignoring "noauto" for root device
	[  +5.258034] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.349775] systemd-fstab-generator[4980]: Ignoring "noauto" for root device
	[  +0.138234] systemd-fstab-generator[4991]: Ignoring "noauto" for root device
	[  +0.169296] systemd-fstab-generator[5007]: Ignoring "noauto" for root device
	[  +0.160988] systemd-fstab-generator[5056]: Ignoring "noauto" for root device
	[  +0.226282] systemd-fstab-generator[5127]: Ignoring "noauto" for root device
	[  +4.119790] kauditd_printk_skb: 37 callbacks suppressed
	[Mar31 18:04] systemd-fstab-generator[7161]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [80b600760e99] <==
	* {"level":"warn","ts":"2023-03-31T18:04:50.753Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-03-31T18:04:50.314Z","time spent":"439.098122ms","remote":"127.0.0.1:52040","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6620,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-939189\" mod_revision:461 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-939189\" value_size:6558 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-939189\" > >"}
	{"level":"warn","ts":"2023-03-31T18:04:50.754Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"212.221672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:341"}
	{"level":"info","ts":"2023-03-31T18:04:50.754Z","caller":"traceutil/trace.go:171","msg":"trace[1823280090] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:462; }","duration":"212.395865ms","start":"2023-03-31T18:04:50.542Z","end":"2023-03-31T18:04:50.754Z","steps":["trace[1823280090] 'agreement among raft nodes before linearized reading'  (duration: 212.138709ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-31T18:04:50.754Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"341.184734ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-939189\" ","response":"range_response_count:1 size:5480"}
	{"level":"info","ts":"2023-03-31T18:04:50.754Z","caller":"traceutil/trace.go:171","msg":"trace[1705229913] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-939189; range_end:; response_count:1; response_revision:462; }","duration":"341.208794ms","start":"2023-03-31T18:04:50.413Z","end":"2023-03-31T18:04:50.754Z","steps":["trace[1705229913] 'agreement among raft nodes before linearized reading'  (duration: 341.128291ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-31T18:04:50.754Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-03-31T18:04:50.413Z","time spent":"341.245678ms","remote":"127.0.0.1:52040","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5504,"request content":"key:\"/registry/pods/kube-system/etcd-pause-939189\" "}
	{"level":"warn","ts":"2023-03-31T18:04:51.208Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"258.605359ms","expected-duration":"100ms","prefix":"","request":"header:<ID:839788533735404794 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:0ba78738d7beb4f9>","response":"size:41"}
	{"level":"info","ts":"2023-03-31T18:04:51.209Z","caller":"traceutil/trace.go:171","msg":"trace[2128410207] linearizableReadLoop","detail":"{readStateIndex:500; appliedIndex:499; }","duration":"296.499176ms","start":"2023-03-31T18:04:50.912Z","end":"2023-03-31T18:04:51.209Z","steps":["trace[2128410207] 'read index received'  (duration: 37.740315ms)","trace[2128410207] 'applied index is now lower than readState.Index'  (duration: 258.757557ms)"],"step_count":2}
	{"level":"warn","ts":"2023-03-31T18:04:51.209Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"296.647465ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-939189\" ","response":"range_response_count:1 size:5480"}
	{"level":"info","ts":"2023-03-31T18:04:51.209Z","caller":"traceutil/trace.go:171","msg":"trace[478960090] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-939189; range_end:; response_count:1; response_revision:462; }","duration":"296.673964ms","start":"2023-03-31T18:04:50.912Z","end":"2023-03-31T18:04:51.209Z","steps":["trace[478960090] 'agreement among raft nodes before linearized reading'  (duration: 296.561324ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-31T18:04:51.209Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-03-31T18:04:50.762Z","time spent":"447.271669ms","remote":"127.0.0.1:52016","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2023-03-31T18:04:52.108Z","caller":"traceutil/trace.go:171","msg":"trace[1920228168] linearizableReadLoop","detail":"{readStateIndex:502; appliedIndex:501; }","duration":"165.267816ms","start":"2023-03-31T18:04:51.943Z","end":"2023-03-31T18:04:52.108Z","steps":["trace[1920228168] 'read index received'  (duration: 165.022721ms)","trace[1920228168] 'applied index is now lower than readState.Index'  (duration: 244.277µs)"],"step_count":2}
	{"level":"info","ts":"2023-03-31T18:04:52.110Z","caller":"traceutil/trace.go:171","msg":"trace[1687701317] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"176.741493ms","start":"2023-03-31T18:04:51.933Z","end":"2023-03-31T18:04:52.110Z","steps":["trace[1687701317] 'process raft request'  (duration: 175.168227ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-31T18:04:52.112Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"168.992818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2023-03-31T18:04:52.112Z","caller":"traceutil/trace.go:171","msg":"trace[1794617064] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:464; }","duration":"169.069396ms","start":"2023-03-31T18:04:51.943Z","end":"2023-03-31T18:04:52.112Z","steps":["trace[1794617064] 'agreement among raft nodes before linearized reading'  (duration: 165.391165ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-31T18:04:52.293Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"123.74239ms","expected-duration":"100ms","prefix":"","request":"header:<ID:839788533735404827 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-f9qtf\" mod_revision:390 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-f9qtf\" value_size:1239 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-f9qtf\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-03-31T18:04:52.294Z","caller":"traceutil/trace.go:171","msg":"trace[280136650] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"168.841202ms","start":"2023-03-31T18:04:52.125Z","end":"2023-03-31T18:04:52.294Z","steps":["trace[280136650] 'process raft request'  (duration: 44.44482ms)","trace[280136650] 'compare'  (duration: 123.644413ms)"],"step_count":2}
	{"level":"info","ts":"2023-03-31T18:04:52.297Z","caller":"traceutil/trace.go:171","msg":"trace[929692375] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"142.41231ms","start":"2023-03-31T18:04:52.154Z","end":"2023-03-31T18:04:52.297Z","steps":["trace[929692375] 'process raft request'  (duration: 142.313651ms)"],"step_count":1}
	{"level":"info","ts":"2023-03-31T18:04:52.298Z","caller":"traceutil/trace.go:171","msg":"trace[1640521255] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"169.933179ms","start":"2023-03-31T18:04:52.128Z","end":"2023-03-31T18:04:52.298Z","steps":["trace[1640521255] 'process raft request'  (duration: 168.949367ms)"],"step_count":1}
	{"level":"info","ts":"2023-03-31T18:04:52.583Z","caller":"traceutil/trace.go:171","msg":"trace[1929288585] linearizableReadLoop","detail":"{readStateIndex:506; appliedIndex:505; }","duration":"170.211991ms","start":"2023-03-31T18:04:52.412Z","end":"2023-03-31T18:04:52.583Z","steps":["trace[1929288585] 'read index received'  (duration: 128.7627ms)","trace[1929288585] 'applied index is now lower than readState.Index'  (duration: 41.448583ms)"],"step_count":2}
	{"level":"info","ts":"2023-03-31T18:04:52.583Z","caller":"traceutil/trace.go:171","msg":"trace[47408908] transaction","detail":"{read_only:false; response_revision:468; number_of_response:1; }","duration":"258.75753ms","start":"2023-03-31T18:04:52.324Z","end":"2023-03-31T18:04:52.583Z","steps":["trace[47408908] 'process raft request'  (duration: 216.820717ms)","trace[47408908] 'compare'  (duration: 41.26405ms)"],"step_count":2}
	{"level":"warn","ts":"2023-03-31T18:04:52.584Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"171.519483ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-939189\" ","response":"range_response_count:1 size:5480"}
	{"level":"info","ts":"2023-03-31T18:04:52.584Z","caller":"traceutil/trace.go:171","msg":"trace[1263506650] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-939189; range_end:; response_count:1; response_revision:468; }","duration":"171.595141ms","start":"2023-03-31T18:04:52.412Z","end":"2023-03-31T18:04:52.584Z","steps":["trace[1263506650] 'agreement among raft nodes before linearized reading'  (duration: 171.444814ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-31T18:04:52.584Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"150.725144ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-03-31T18:04:52.585Z","caller":"traceutil/trace.go:171","msg":"trace[213446996] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:468; }","duration":"150.795214ms","start":"2023-03-31T18:04:52.434Z","end":"2023-03-31T18:04:52.584Z","steps":["trace[213446996] 'agreement among raft nodes before linearized reading'  (duration: 150.635678ms)"],"step_count":1}
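
The recurring "apply request took too long" warnings mean single etcd applies blew past the 100 ms expected-duration budget; on a shared CI host this usually indicates disk or CPU contention rather than a cluster fault. Since etcd emits structured JSON, such lines are easy to filter mechanically (sketch only; field names are taken from the entries above, and line is an assumed input string):

	var e struct {
		Level string `json:"level"`
		Msg   string `json:"msg"`
		Took  string `json:"took"`
	}
	if json.Unmarshal([]byte(line), &e) == nil &&
		e.Msg == "apply request took too long" {
		fmt.Printf("slow apply (%s): took %s\n", e.Level, e.Took)
	}
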
	
	* 
	* ==> etcd [a0ad0a35a3e0] <==
	* {"level":"info","ts":"2023-03-31T18:04:14.959Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.39.142:2380"}
	{"level":"info","ts":"2023-03-31T18:04:14.959Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.142:2380"}
	{"level":"info","ts":"2023-03-31T18:04:14.959Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-03-31T18:04:14.962Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"d7a5d3e20a6b0ba7","initial-advertise-peer-urls":["https://192.168.39.142:2380"],"listen-peer-urls":["https://192.168.39.142:2380"],"advertise-client-urls":["https://192.168.39.142:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.142:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-03-31T18:04:14.962Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 received MsgPreVoteResp from d7a5d3e20a6b0ba7 at term 3"}
	{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 became candidate at term 4"}
	{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 received MsgVoteResp from d7a5d3e20a6b0ba7 at term 4"}
	{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 became leader at term 4"}
	{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d7a5d3e20a6b0ba7 elected leader d7a5d3e20a6b0ba7 at term 4"}
	{"level":"info","ts":"2023-03-31T18:04:15.341Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"d7a5d3e20a6b0ba7","local-member-attributes":"{Name:pause-939189 ClientURLs:[https://192.168.39.142:2379]}","request-path":"/0/members/d7a5d3e20a6b0ba7/attributes","cluster-id":"f7d6b5428c0c9dc0","publish-timeout":"7s"}
	{"level":"info","ts":"2023-03-31T18:04:15.341Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-31T18:04:15.342Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-03-31T18:04:15.342Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-31T18:04:15.343Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.142:2379"}
	{"level":"info","ts":"2023-03-31T18:04:15.347Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-03-31T18:04:15.347Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-03-31T18:04:27.719Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-03-31T18:04:27.719Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-939189","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.142:2380"],"advertise-client-urls":["https://192.168.39.142:2379"]}
	{"level":"info","ts":"2023-03-31T18:04:27.723Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d7a5d3e20a6b0ba7","current-leader-member-id":"d7a5d3e20a6b0ba7"}
	{"level":"info","ts":"2023-03-31T18:04:27.727Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.39.142:2380"}
	{"level":"info","ts":"2023-03-31T18:04:27.728Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.39.142:2380"}
	{"level":"info","ts":"2023-03-31T18:04:27.728Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-939189","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.142:2380"],"advertise-client-urls":["https://192.168.39.142:2379"]}
	
	* 
	* ==> kernel <==
	*  18:04:58 up 2 min,  0 users,  load average: 2.10, 1.02, 0.39
	Linux pause-939189 5.10.57 #1 SMP Wed Mar 29 23:38:32 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [874fcc56f9f6] <==
	* W0331 18:04:09.094355       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0331 18:04:10.570941       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0331 18:04:14.640331       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	E0331 18:04:19.527936       1 run.go:74] "command failed" err="context deadline exceeded"
	
	* 
	* ==> kube-apiserver [966b1cd3b351] <==
	* I0331 18:04:39.222688       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0331 18:04:39.205515       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0331 18:04:39.314255       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0331 18:04:39.316506       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0331 18:04:39.317062       1 shared_informer.go:280] Caches are synced for configmaps
	I0331 18:04:39.318946       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0331 18:04:39.323304       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0331 18:04:39.338800       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0331 18:04:39.338942       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0331 18:04:39.339358       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0331 18:04:39.397474       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0331 18:04:39.418720       1 cache.go:39] Caches are synced for autoregister controller
	I0331 18:04:39.958002       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0331 18:04:40.221547       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0331 18:04:41.099152       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0331 18:04:41.124185       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0331 18:04:41.212998       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0331 18:04:41.267710       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0331 18:04:41.286487       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0331 18:04:51.284113       1 trace.go:219] Trace[2025945949]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.142,type:*v1.Endpoints,resource:apiServerIPInfo (31-Mar-2023 18:04:50.760) (total time: 523ms):
	Trace[2025945949]: ---"Transaction prepared" 449ms (18:04:51.210)
	Trace[2025945949]: ---"Txn call completed" 73ms (18:04:51.284)
	Trace[2025945949]: [523.960493ms] [523.960493ms] END
	I0331 18:04:51.929561       1 controller.go:615] quota admission added evaluator for: endpoints
	I0331 18:04:52.124697       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [84de5d76d35c] <==
	* W0331 18:04:52.065251       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="pause-939189" does not exist
	I0331 18:04:52.067639       1 shared_informer.go:280] Caches are synced for resource quota
	I0331 18:04:52.076620       1 shared_informer.go:280] Caches are synced for attach detach
	I0331 18:04:52.084564       1 shared_informer.go:280] Caches are synced for daemon sets
	I0331 18:04:52.087592       1 shared_informer.go:280] Caches are synced for endpoint_slice
	I0331 18:04:52.100706       1 shared_informer.go:280] Caches are synced for node
	I0331 18:04:52.100905       1 range_allocator.go:167] Sending events to api server.
	I0331 18:04:52.101097       1 range_allocator.go:171] Starting range CIDR allocator
	I0331 18:04:52.101132       1 shared_informer.go:273] Waiting for caches to sync for cidrallocator
	I0331 18:04:52.101145       1 shared_informer.go:280] Caches are synced for cidrallocator
	I0331 18:04:52.109512       1 shared_informer.go:280] Caches are synced for GC
	I0331 18:04:52.110949       1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
	I0331 18:04:52.111820       1 shared_informer.go:280] Caches are synced for resource quota
	I0331 18:04:52.151113       1 shared_informer.go:280] Caches are synced for taint
	I0331 18:04:52.151644       1 shared_informer.go:280] Caches are synced for TTL
	I0331 18:04:52.151696       1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone: 
	W0331 18:04:52.152283       1 node_lifecycle_controller.go:1053] Missing timestamp for Node pause-939189. Assuming now as a timestamp.
	I0331 18:04:52.152564       1 node_lifecycle_controller.go:1254] Controller detected that zone  is now in state Normal.
	I0331 18:04:52.152806       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0331 18:04:52.153068       1 taint_manager.go:211] "Sending events to api server"
	I0331 18:04:52.154301       1 event.go:294] "Event occurred" object="pause-939189" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-939189 event: Registered Node pause-939189 in Controller"
	I0331 18:04:52.157444       1 shared_informer.go:280] Caches are synced for persistent volume
	I0331 18:04:52.506059       1 shared_informer.go:280] Caches are synced for garbage collector
	I0331 18:04:52.506479       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0331 18:04:52.533136       1 shared_informer.go:280] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [8ace7d6c4bee] <==
	* I0331 18:03:59.321744       1 serving.go:348] Generated self-signed cert in-memory
	I0331 18:03:59.853937       1 controllermanager.go:182] Version: v1.26.3
	I0331 18:03:59.853990       1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0331 18:03:59.855979       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0331 18:03:59.856127       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0331 18:03:59.856668       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0331 18:03:59.856802       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	F0331 18:04:20.535428       1 controllermanager.go:228] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [1686d0df28f1] <==
	* I0331 18:04:41.170371       1 node.go:163] Successfully retrieved node IP: 192.168.39.142
	I0331 18:04:41.170425       1 server_others.go:109] "Detected node IP" address="192.168.39.142"
	I0331 18:04:41.170450       1 server_others.go:535] "Using iptables proxy"
	I0331 18:04:41.271349       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0331 18:04:41.271390       1 server_others.go:176] "Using iptables Proxier"
	I0331 18:04:41.271446       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0331 18:04:41.271898       1 server.go:655] "Version info" version="v1.26.3"
	I0331 18:04:41.271978       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0331 18:04:41.276289       1 config.go:317] "Starting service config controller"
	I0331 18:04:41.276432       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0331 18:04:41.276461       1 config.go:226] "Starting endpoint slice config controller"
	I0331 18:04:41.276465       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0331 18:04:41.277123       1 config.go:444] "Starting node config controller"
	I0331 18:04:41.277131       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0331 18:04:41.376963       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0331 18:04:41.377002       1 shared_informer.go:280] Caches are synced for service config
	I0331 18:04:41.377248       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-proxy [9999f58d2765] <==
	* E0331 18:04:20.538153       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-939189": dial tcp 192.168.39.142:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.142:56890->192.168.39.142:8443: read: connection reset by peer
	E0331 18:04:21.665395       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-939189": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:23.920058       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-939189": dial tcp 192.168.39.142:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [5d40b2ef4a86] <==
	* I0331 18:04:36.274158       1 serving.go:348] Generated self-signed cert in-memory
	W0331 18:04:39.233042       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0331 18:04:39.233351       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0331 18:04:39.233637       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0331 18:04:39.233672       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0331 18:04:39.306413       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.3"
	I0331 18:04:39.306462       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0331 18:04:39.308017       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0331 18:04:39.308563       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0331 18:04:39.308610       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0331 18:04:39.308627       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0331 18:04:39.409801       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [b4599f5bff86] <==
	* E0331 18:04:24.036070       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.142:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	W0331 18:04:24.450493       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.142:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:24.450560       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.142:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	W0331 18:04:24.681951       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:24.682039       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	W0331 18:04:24.877656       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:24.878016       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	W0331 18:04:24.900986       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.39.142:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:24.901338       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.142:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	W0331 18:04:24.987726       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:24.988045       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	W0331 18:04:25.024394       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.142:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:25.024478       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.142:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	W0331 18:04:25.132338       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.142:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:25.132589       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.142:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	W0331 18:04:26.745186       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.142:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:26.745273       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.142:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	W0331 18:04:26.909186       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.142:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:26.909259       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.142:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	W0331 18:04:27.588118       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.39.142:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:27.588180       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.142:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	I0331 18:04:27.668688       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0331 18:04:27.668780       1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0331 18:04:27.668791       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0331 18:04:27.669106       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Fri 2023-03-31 18:02:19 UTC, ends at Fri 2023-03-31 18:04:58 UTC. --
	Mar 31 18:04:35 pause-939189 kubelet[7167]: I0331 18:04:35.060753    7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbeb3c050b5e21453f641a818794f61-kubeconfig\") pod \"kube-controller-manager-pause-939189\" (UID: \"5bbeb3c050b5e21453f641a818794f61\") " pod="kube-system/kube-controller-manager-pause-939189"
	Mar 31 18:04:35 pause-939189 kubelet[7167]: I0331 18:04:35.060806    7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbeb3c050b5e21453f641a818794f61-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-939189\" (UID: \"5bbeb3c050b5e21453f641a818794f61\") " pod="kube-system/kube-controller-manager-pause-939189"
	Mar 31 18:04:35 pause-939189 kubelet[7167]: I0331 18:04:35.173548    7167 scope.go:115] "RemoveContainer" containerID="a0ad0a35a3e08720ef402cc44066aa6415d3380188ccf061278936b018f9164f"
	Mar 31 18:04:35 pause-939189 kubelet[7167]: I0331 18:04:35.206303    7167 scope.go:115] "RemoveContainer" containerID="b4599f5bff86da254627b8fa420dbfa886e737fe4bf8140cd8ac5ec3f882a89e"
	Mar 31 18:04:35 pause-939189 kubelet[7167]: I0331 18:04:35.871491    7167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5b35d44675c82be44631616cd6f0a52aa1dc911e88776342deacc611d359e35"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.403200    7167 kubelet_node_status.go:108] "Node was previously registered" node="pause-939189"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.403314    7167 kubelet_node_status.go:73] "Successfully registered node" node="pause-939189"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.406119    7167 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.407529    7167 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.534534    7167 apiserver.go:52] "Watching apiserver"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.537600    7167 topology_manager.go:210] "Topology Admit Handler"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.537920    7167 topology_manager.go:210] "Topology Admit Handler"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.561329    7167 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.592448    7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd3378f4-948b-4bec-abd3-ea9dc35d3259-xtables-lock\") pod \"kube-proxy-jg8p6\" (UID: \"dd3378f4-948b-4bec-abd3-ea9dc35d3259\") " pod="kube-system/kube-proxy-jg8p6"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.592793    7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e78e1f9-1a39-4c02-a4e9-51e5b268d077-config-volume\") pod \"coredns-787d4945fb-hcrtc\" (UID: \"1e78e1f9-1a39-4c02-a4e9-51e5b268d077\") " pod="kube-system/coredns-787d4945fb-hcrtc"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.593000    7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxlhf\" (UniqueName: \"kubernetes.io/projected/dd3378f4-948b-4bec-abd3-ea9dc35d3259-kube-api-access-nxlhf\") pod \"kube-proxy-jg8p6\" (UID: \"dd3378f4-948b-4bec-abd3-ea9dc35d3259\") " pod="kube-system/kube-proxy-jg8p6"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.593182    7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd3378f4-948b-4bec-abd3-ea9dc35d3259-lib-modules\") pod \"kube-proxy-jg8p6\" (UID: \"dd3378f4-948b-4bec-abd3-ea9dc35d3259\") " pod="kube-system/kube-proxy-jg8p6"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.593344    7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n26cp\" (UniqueName: \"kubernetes.io/projected/1e78e1f9-1a39-4c02-a4e9-51e5b268d077-kube-api-access-n26cp\") pod \"coredns-787d4945fb-hcrtc\" (UID: \"1e78e1f9-1a39-4c02-a4e9-51e5b268d077\") " pod="kube-system/coredns-787d4945fb-hcrtc"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.593511    7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dd3378f4-948b-4bec-abd3-ea9dc35d3259-kube-proxy\") pod \"kube-proxy-jg8p6\" (UID: \"dd3378f4-948b-4bec-abd3-ea9dc35d3259\") " pod="kube-system/kube-proxy-jg8p6"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.593631    7167 reconciler.go:41] "Reconciler: start to sync state"
	Mar 31 18:04:40 pause-939189 kubelet[7167]: I0331 18:04:40.739124    7167 scope.go:115] "RemoveContainer" containerID="9999f58d276569aa698d96721d17b94fa850bf4239d5df11ce622ad76d4c9c20"
	Mar 31 18:04:40 pause-939189 kubelet[7167]: I0331 18:04:40.900279    7167 request.go:690] Waited for 1.195299342s due to client-side throttling, not priority and fairness, request: PATCH:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-939189/status
	Mar 31 18:04:41 pause-939189 kubelet[7167]: I0331 18:04:41.825587    7167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11bb612576207ce6f9fdbde8dfa7f6235a96c8d3be559f2e51d8d4b173aa4b51"
	Mar 31 18:04:43 pause-939189 kubelet[7167]: I0331 18:04:43.869081    7167 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Mar 31 18:04:45 pause-939189 kubelet[7167]: I0331 18:04:45.920720    7167 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-939189 -n pause-939189
helpers_test.go:261: (dbg) Run:  kubectl --context pause-939189 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-939189 -n pause-939189
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-939189 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-939189 logs -n 25: (1.269869644s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                      Args                      |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p stopped-upgrade-202435                      | stopped-upgrade-202435    | jenkins | v1.29.0 | 31 Mar 23 18:00 UTC | 31 Mar 23 18:02 UTC |
	|         | --memory=2200                                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-075589                   | kubernetes-upgrade-075589 | jenkins | v1.29.0 | 31 Mar 23 18:01 UTC |                     |
	|         | --memory=2200                                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-075589                   | kubernetes-upgrade-075589 | jenkins | v1.29.0 | 31 Mar 23 18:01 UTC | 31 Mar 23 18:02 UTC |
	|         | --memory=2200                                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.0-rc.0              |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-549601                      | cert-expiration-549601    | jenkins | v1.29.0 | 31 Mar 23 18:01 UTC | 31 Mar 23 18:01 UTC |
	| start   | -p pause-939189 --memory=2048                  | pause-939189              | jenkins | v1.29.0 | 31 Mar 23 18:01 UTC | 31 Mar 23 18:03 UTC |
	|         | --install-addons=false                         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                       |                           |         |         |                     |                     |
	| cache   | gvisor-836132 cache add                        | gvisor-836132             | jenkins | v1.29.0 | 31 Mar 23 18:01 UTC | 31 Mar 23 18:02 UTC |
	|         | gcr.io/k8s-minikube/gvisor-addon:2             |                           |         |         |                     |                     |
	| addons  | gvisor-836132 addons enable                    | gvisor-836132             | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:02 UTC |
	|         | gvisor                                         |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-202435                      | stopped-upgrade-202435    | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:02 UTC |
	| start   | -p force-systemd-env-066234                    | force-systemd-env-066234  | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:03 UTC |
	|         | --memory=2048                                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-075589                   | kubernetes-upgrade-075589 | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:02 UTC |
	| start   | -p cert-options-885841                         | cert-options-885841       | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:04 UTC |
	|         | --memory=2048                                  |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                  |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                    |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com               |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| stop    | -p gvisor-836132                               | gvisor-836132             | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:04 UTC |
	| start   | -p pause-939189                                | pause-939189              | jenkins | v1.29.0 | 31 Mar 23 18:03 UTC | 31 Mar 23 18:04 UTC |
	|         | --alsologtostderr -v=1                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-066234                       | force-systemd-env-066234  | jenkins | v1.29.0 | 31 Mar 23 18:03 UTC | 31 Mar 23 18:03 UTC |
	|         | ssh docker info --format                       |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-066234                    | force-systemd-env-066234  | jenkins | v1.29.0 | 31 Mar 23 18:03 UTC | 31 Mar 23 18:03 UTC |
	| start   | -p NoKubernetes-746317                         | NoKubernetes-746317       | jenkins | v1.29.0 | 31 Mar 23 18:03 UTC |                     |
	|         | --no-kubernetes                                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20                      |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-746317                         | NoKubernetes-746317       | jenkins | v1.29.0 | 31 Mar 23 18:03 UTC |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| ssh     | cert-options-885841 ssh                        | cert-options-885841       | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | 31 Mar 23 18:04 UTC |
	|         | openssl x509 -text -noout -in                  |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt          |                           |         |         |                     |                     |
	| ssh     | -p cert-options-885841 -- sudo                 | cert-options-885841       | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | 31 Mar 23 18:04 UTC |
	|         | cat /etc/kubernetes/admin.conf                 |                           |         |         |                     |                     |
	| delete  | -p cert-options-885841                         | cert-options-885841       | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | 31 Mar 23 18:04 UTC |
	| start   | -p auto-347180 --memory=3072                   | auto-347180               | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC |                     |
	|         | --alsologtostderr --wait=true                  |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                             |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-746317                         | NoKubernetes-746317       | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | 31 Mar 23 18:04 UTC |
	|         | --no-kubernetes --driver=kvm2                  |                           |         |         |                     |                     |
	| start   | -p gvisor-836132 --memory=2200                 | gvisor-836132             | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC |                     |
	|         | --container-runtime=containerd --docker-opt    |                           |         |         |                     |                     |
	|         | containerd=/var/run/containerd/containerd.sock |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-746317                         | NoKubernetes-746317       | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | 31 Mar 23 18:04 UTC |
	| start   | -p NoKubernetes-746317                         | NoKubernetes-746317       | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC |                     |
	|         | --no-kubernetes --driver=kvm2                  |                           |         |         |                     |                     |
	|---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/31 18:04:52
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0331 18:04:52.112989   33820 out.go:296] Setting OutFile to fd 1 ...
	I0331 18:04:52.113170   33820 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 18:04:52.113174   33820 out.go:309] Setting ErrFile to fd 2...
	I0331 18:04:52.113180   33820 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 18:04:52.113343   33820 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16144-3494/.minikube/bin
	I0331 18:04:52.114025   33820 out.go:303] Setting JSON to false
	I0331 18:04:52.115095   33820 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2843,"bootTime":1680283049,"procs":255,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1031-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0331 18:04:52.115161   33820 start.go:135] virtualization: kvm guest
	I0331 18:04:52.202763   33820 out.go:177] * [NoKubernetes-746317] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0331 18:04:52.295981   33820 out.go:177]   - MINIKUBE_LOCATION=16144
	I0331 18:04:52.295891   33820 notify.go:220] Checking for updates...
	I0331 18:04:52.419505   33820 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 18:04:52.544450   33820 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16144-3494/kubeconfig
	I0331 18:04:52.604388   33820 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16144-3494/.minikube
	I0331 18:04:52.606360   33820 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0331 18:04:52.608233   33820 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0331 18:04:52.610384   33820 config.go:182] Loaded profile config "auto-347180": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 18:04:52.610538   33820 config.go:182] Loaded profile config "gvisor-836132": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.3
	I0331 18:04:52.610724   33820 config.go:182] Loaded profile config "pause-939189": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 18:04:52.610745   33820 start.go:1732] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0331 18:04:52.610778   33820 driver.go:365] Setting default libvirt URI to qemu:///system
	I0331 18:04:52.649175   33820 out.go:177] * Using the kvm2 driver based on user configuration
	I0331 18:04:52.650741   33820 start.go:295] selected driver: kvm2
	I0331 18:04:52.650750   33820 start.go:859] validating driver "kvm2" against <nil>
	I0331 18:04:52.650762   33820 start.go:870] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0331 18:04:52.651120   33820 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 18:04:52.651207   33820 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16144-3494/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0331 18:04:52.665942   33820 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0331 18:04:52.665977   33820 start.go:1732] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0331 18:04:52.665987   33820 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0331 18:04:52.666616   33820 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
	I0331 18:04:52.666788   33820 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0331 18:04:52.666808   33820 cni.go:84] Creating CNI manager for ""
	I0331 18:04:52.666818   33820 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 18:04:52.666825   33820 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0331 18:04:52.666832   33820 start_flags.go:319] config:
	{Name:NoKubernetes-746317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:NoKubernetes-746317 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 18:04:52.666906   33820 start.go:1732] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0331 18:04:52.666977   33820 iso.go:125] acquiring lock: {Name:mk48583bcdf05c8e72651ed56790356a32c028b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 18:04:52.669123   33820 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-746317
	I0331 18:04:48.155281   33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
	I0331 18:04:48.155871   33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
	I0331 18:04:48.155896   33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:48.155813   33603 retry.go:31] will retry after 283.128145ms: waiting for machine to come up
	I0331 18:04:48.440401   33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
	I0331 18:04:48.440902   33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
	I0331 18:04:48.440924   33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:48.440860   33603 retry.go:31] will retry after 410.682274ms: waiting for machine to come up
	I0331 18:04:48.853565   33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
	I0331 18:04:48.854037   33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
	I0331 18:04:48.854052   33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:48.854000   33603 retry.go:31] will retry after 497.486632ms: waiting for machine to come up
	I0331 18:04:49.353711   33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
	I0331 18:04:49.354221   33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
	I0331 18:04:49.354243   33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:49.354178   33603 retry.go:31] will retry after 611.052328ms: waiting for machine to come up
	I0331 18:04:49.967240   33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
	I0331 18:04:50.040539   33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
	I0331 18:04:50.040577   33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:50.040409   33603 retry.go:31] will retry after 763.986572ms: waiting for machine to come up
	I0331 18:04:50.876927   33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
	I0331 18:04:50.877366   33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
	I0331 18:04:50.877457   33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:50.877308   33603 retry.go:31] will retry after 955.134484ms: waiting for machine to come up
	I0331 18:04:51.834716   33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
	I0331 18:04:51.835256   33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
	I0331 18:04:51.835316   33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:51.835243   33603 retry.go:31] will retry after 1.216587491s: waiting for machine to come up
	I0331 18:04:53.053498   33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
	I0331 18:04:53.054031   33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
	I0331 18:04:53.054059   33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:53.053989   33603 retry.go:31] will retry after 1.334972483s: waiting for machine to come up
	I0331 18:04:50.765070   32536 pod_ready.go:102] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"False"
	I0331 18:04:52.921656   32536 pod_ready.go:102] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"False"
	I0331 18:04:53.421399   32536 pod_ready.go:92] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:53.421429   32536 pod_ready.go:81] duration metric: took 7.01965493s waiting for pod "etcd-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.421441   32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.429675   32536 pod_ready.go:92] pod "kube-apiserver-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:53.429697   32536 pod_ready.go:81] duration metric: took 8.249323ms waiting for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.429708   32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.438704   32536 pod_ready.go:92] pod "kube-controller-manager-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:53.438720   32536 pod_ready.go:81] duration metric: took 9.003572ms waiting for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.438731   32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.446519   32536 pod_ready.go:92] pod "kube-proxy-jg8p6" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:53.446534   32536 pod_ready.go:81] duration metric: took 7.795873ms waiting for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.446545   32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.451227   32536 pod_ready.go:92] pod "kube-scheduler-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:53.451242   32536 pod_ready.go:81] duration metric: took 4.691126ms waiting for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:53.451250   32536 pod_ready.go:38] duration metric: took 12.105730649s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0331 18:04:53.451272   32536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0331 18:04:53.463906   32536 ops.go:34] apiserver oom_adj: -16
	I0331 18:04:53.463925   32536 kubeadm.go:637] restartCluster took 55.388480099s
	I0331 18:04:53.463933   32536 kubeadm.go:403] StartCluster complete in 55.545742823s
	I0331 18:04:53.463952   32536 settings.go:142] acquiring lock: {Name:mk54cf97b6d1b5b12dec7aad9dd26d754e62bcd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 18:04:53.464032   32536 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16144-3494/kubeconfig
	I0331 18:04:53.464825   32536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/kubeconfig: {Name:mk0e63c10dbce63578041d9db05c951415a42011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 18:04:53.465096   32536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0331 18:04:53.465243   32536 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0331 18:04:53.465315   32536 config.go:182] Loaded profile config "pause-939189": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 18:04:53.465367   32536 cache.go:107] acquiring lock: {Name:mka2cf660dd4d542e74644eb9f55d9546287db85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 18:04:53.465432   32536 cache.go:115] /home/jenkins/minikube-integration/16144-3494/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0331 18:04:53.468377   32536 out.go:177] * Enabled addons: 
	I0331 18:04:53.465440   32536 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/16144-3494/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 77.875µs
	I0331 18:04:53.465689   32536 kapi.go:59] client config for pause-939189: &rest.Config{Host:"https://192.168.39.142:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/client.crt", KeyFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/client.key", CAFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x192bee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0331 18:04:53.469869   32536 addons.go:499] enable addons completed in 4.62348ms: enabled=[]
	I0331 18:04:53.469887   32536 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/16144-3494/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0331 18:04:53.469904   32536 cache.go:87] Successfully saved all images to host disk.
	I0331 18:04:53.470079   32536 config.go:182] Loaded profile config "pause-939189": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 18:04:53.470390   32536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 18:04:53.470414   32536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 18:04:53.472779   32536 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-939189" context rescaled to 1 replicas
	I0331 18:04:53.472816   32536 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0331 18:04:53.474464   32536 out.go:177] * Verifying Kubernetes components...
	I0331 18:04:49.689822   33276 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.064390662s)
	I0331 18:04:49.689845   33276 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0331 18:04:49.730226   33276 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0331 18:04:49.740534   33276 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
	I0331 18:04:49.759896   33276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 18:04:49.892044   33276 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0331 18:04:52.833806   33276 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.941720773s)
	I0331 18:04:52.833863   33276 start.go:481] detecting cgroup driver to use...
	I0331 18:04:52.833984   33276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 18:04:52.856132   33276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0331 18:04:52.867005   33276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0331 18:04:52.875838   33276 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0331 18:04:52.875899   33276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0331 18:04:52.885209   33276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 18:04:52.895294   33276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0331 18:04:52.906080   33276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 18:04:52.916021   33276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0331 18:04:52.927401   33276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0331 18:04:52.936940   33276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0331 18:04:52.945127   33276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0331 18:04:52.953052   33276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 18:04:53.053440   33276 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0331 18:04:53.071425   33276 start.go:481] detecting cgroup driver to use...
	I0331 18:04:53.071501   33276 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0331 18:04:53.090019   33276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0331 18:04:53.104446   33276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0331 18:04:53.123957   33276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0331 18:04:53.139648   33276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0331 18:04:53.155612   33276 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0331 18:04:53.186101   33276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0331 18:04:53.202708   33276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 18:04:53.222722   33276 ssh_runner.go:195] Run: which cri-dockerd
	I0331 18:04:53.227094   33276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0331 18:04:53.236406   33276 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0331 18:04:53.252225   33276 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0331 18:04:53.363704   33276 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0331 18:04:53.479794   33276 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0331 18:04:53.479826   33276 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0331 18:04:53.502900   33276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 18:04:53.633618   33276 ssh_runner.go:195] Run: sudo systemctl restart docker
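The daemon.json scp'd a few lines up (144 bytes, contents not shown in the log) is what points dockerd at the "cgroupfs" driver before the restart just above. A sketch of a minimal equivalent payload, assuming Docker's documented exec-opts key; the real file may carry extra fields:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		// "exec-opts"/"native.cgroupdriver" is Docker's documented daemon.json
		// knob for the cgroup driver; the exact payload minikube ships is an
		// assumption here.
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		b, _ := json.MarshalIndent(cfg, "", "  ")
		// /tmp stand-in for /etc/docker/daemon.json
		if err := os.WriteFile("/tmp/daemon.json", b, 0644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(string(b))
	}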
	I0331 18:04:53.475854   32536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 18:04:53.487310   32536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41011
	I0331 18:04:53.487911   32536 main.go:141] libmachine: () Calling .GetVersion
	I0331 18:04:53.488552   32536 main.go:141] libmachine: Using API Version  1
	I0331 18:04:53.488581   32536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 18:04:53.488899   32536 main.go:141] libmachine: () Calling .GetMachineName
	I0331 18:04:53.489075   32536 main.go:141] libmachine: (pause-939189) Calling .GetState
	I0331 18:04:53.491520   32536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 18:04:53.491556   32536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 18:04:53.508789   32536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0331 18:04:53.509289   32536 main.go:141] libmachine: () Calling .GetVersion
	I0331 18:04:53.509835   32536 main.go:141] libmachine: Using API Version  1
	I0331 18:04:53.509862   32536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 18:04:53.510320   32536 main.go:141] libmachine: () Calling .GetMachineName
	I0331 18:04:53.510605   32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
	I0331 18:04:53.510836   32536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 18:04:53.510866   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
	I0331 18:04:53.514674   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:04:53.515275   32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
	I0331 18:04:53.515296   32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
	I0331 18:04:53.515586   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
	I0331 18:04:53.515793   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
	I0331 18:04:53.515965   32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
	I0331 18:04:53.516121   32536 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/machines/pause-939189/id_rsa Username:docker}
	I0331 18:04:53.632891   32536 node_ready.go:35] waiting up to 6m0s for node "pause-939189" to be "Ready" ...
	I0331 18:04:53.633113   32536 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0331 18:04:53.637258   32536 node_ready.go:49] node "pause-939189" has status "Ready":"True"
	I0331 18:04:53.637275   32536 node_ready.go:38] duration metric: took 4.35255ms waiting for node "pause-939189" to be "Ready" ...
	I0331 18:04:53.637285   32536 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0331 18:04:53.668203   32536 docker.go:639] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0331 18:04:53.668226   32536 cache_images.go:84] Images are preloaded, skipping loading
	I0331 18:04:53.668235   32536 cache_images.go:262] succeeded pushing to: pause-939189
	I0331 18:04:53.668239   32536 cache_images.go:263] failed pushing to: 
	I0331 18:04:53.668267   32536 main.go:141] libmachine: Making call to close driver server
	I0331 18:04:53.668284   32536 main.go:141] libmachine: (pause-939189) Calling .Close
	I0331 18:04:53.668596   32536 main.go:141] libmachine: Successfully made call to close driver server
	I0331 18:04:53.668613   32536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0331 18:04:53.668625   32536 main.go:141] libmachine: (pause-939189) DBG | Closing plugin on server side
	I0331 18:04:53.668625   32536 main.go:141] libmachine: Making call to close driver server
	I0331 18:04:53.668641   32536 main.go:141] libmachine: (pause-939189) Calling .Close
	I0331 18:04:53.668916   32536 main.go:141] libmachine: (pause-939189) DBG | Closing plugin on server side
	I0331 18:04:53.668922   32536 main.go:141] libmachine: Successfully made call to close driver server
	I0331 18:04:53.668942   32536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0331 18:04:53.821124   32536 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:54.218332   32536 pod_ready.go:92] pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:54.218358   32536 pod_ready.go:81] duration metric: took 397.210316ms waiting for pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:54.218367   32536 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:54.618607   32536 pod_ready.go:92] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:54.618631   32536 pod_ready.go:81] duration metric: took 400.255347ms waiting for pod "etcd-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:54.618640   32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:55.019356   32536 pod_ready.go:92] pod "kube-apiserver-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:55.019378   32536 pod_ready.go:81] duration metric: took 400.731414ms waiting for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:55.019393   32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:55.420085   32536 pod_ready.go:92] pod "kube-controller-manager-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:55.420114   32536 pod_ready.go:81] duration metric: took 400.711919ms waiting for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:55.420130   32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:55.015443   33276 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.381792307s)
	I0331 18:04:55.015525   33276 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0331 18:04:55.133415   33276 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0331 18:04:55.243506   33276 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0331 18:04:55.356452   33276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 18:04:55.477055   33276 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0331 18:04:55.493533   33276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 18:04:55.611643   33276 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0331 18:04:55.707141   33276 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0331 18:04:55.707200   33276 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0331 18:04:55.713403   33276 start.go:549] Will wait 60s for crictl version
	I0331 18:04:55.713474   33276 ssh_runner.go:195] Run: which crictl
	I0331 18:04:55.718338   33276 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0331 18:04:55.774128   33276 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0331 18:04:55.774203   33276 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0331 18:04:55.810277   33276 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
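Before trusting the runtime, the run waits up to 60s for /var/run/cri-dockerd.sock to show up and then probes the crictl and docker versions above. The socket wait, sketched as a stat-poll loop (the timeout is the log's; the loop shape and interval are assumptions):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the deadline passes.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket is up")
	}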
	I0331 18:04:55.819685   32536 pod_ready.go:92] pod "kube-proxy-jg8p6" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:55.819705   32536 pod_ready.go:81] duration metric: took 399.567435ms waiting for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:55.819719   32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:56.219488   32536 pod_ready.go:92] pod "kube-scheduler-pause-939189" in "kube-system" namespace has status "Ready":"True"
	I0331 18:04:56.219513   32536 pod_ready.go:81] duration metric: took 399.783789ms waiting for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
	I0331 18:04:56.219524   32536 pod_ready.go:38] duration metric: took 2.582225755s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
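Each pod_ready block above polls one system-critical pod until its Ready condition reports "True", within a 6m budget. A rough stand-in for that check, shelling out to kubectl instead of the client-go calls minikube uses (pod and namespace names are copied from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func podReady(ns, name string) (bool, error) {
		out, err := exec.Command("kubectl", "get", "pod", "-n", ns, name,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		for i := 0; i < 120; i++ { // ~6m0s at 3s per attempt, matching the log's budget
			if ok, err := podReady("kube-system", "kube-scheduler-pause-939189"); err == nil && ok {
				fmt.Println("Ready")
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out")
	}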
	I0331 18:04:56.219550   32536 api_server.go:51] waiting for apiserver process to appear ...
	I0331 18:04:56.219595   32536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 18:04:56.240919   32536 api_server.go:71] duration metric: took 2.768070005s to wait for apiserver process to appear ...
	I0331 18:04:56.240947   32536 api_server.go:87] waiting for apiserver healthz status ...
	I0331 18:04:56.240961   32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0331 18:04:56.247401   32536 api_server.go:278] https://192.168.39.142:8443/healthz returned 200:
	ok
	I0331 18:04:56.248689   32536 api_server.go:140] control plane version: v1.26.3
	I0331 18:04:56.248709   32536 api_server.go:130] duration metric: took 7.754551ms to wait for apiserver health ...
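The healthz gate is a plain HTTPS GET against the apiserver that must come back 200 "ok" before the control-plane version is read. A minimal sketch of that probe; InsecureSkipVerify here stands in for loading the cluster CA, which the real check would use:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.142:8443/healthz")
		if err != nil {
			fmt.Println("healthz:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
	}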
	I0331 18:04:56.248718   32536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0331 18:04:56.422125   32536 system_pods.go:59] 6 kube-system pods found
	I0331 18:04:56.422151   32536 system_pods.go:61] "coredns-787d4945fb-hcrtc" [1e78e1f9-1a39-4c02-a4e9-51e5b268d077] Running
	I0331 18:04:56.422159   32536 system_pods.go:61] "etcd-pause-939189" [cdc68c44-f3a4-4655-9818-48f074e8e376] Running
	I0331 18:04:56.422166   32536 system_pods.go:61] "kube-apiserver-pause-939189" [c40b018d-97b2-4cdf-9edc-e1473d304c55] Running
	I0331 18:04:56.422174   32536 system_pods.go:61] "kube-controller-manager-pause-939189" [69a62fcf-5db8-4354-aa08-ee5d2209a0ed] Running
	I0331 18:04:56.422181   32536 system_pods.go:61] "kube-proxy-jg8p6" [dd3378f4-948b-4bec-abd3-ea9dc35d3259] Running
	I0331 18:04:56.422187   32536 system_pods.go:61] "kube-scheduler-pause-939189" [b51eb2f5-8508-46f2-8c02-652ad1a69a1e] Running
	I0331 18:04:56.422193   32536 system_pods.go:74] duration metric: took 173.469145ms to wait for pod list to return data ...
	I0331 18:04:56.422202   32536 default_sa.go:34] waiting for default service account to be created ...
	I0331 18:04:56.618165   32536 default_sa.go:45] found service account: "default"
	I0331 18:04:56.618190   32536 default_sa.go:55] duration metric: took 195.978567ms for default service account to be created ...
	I0331 18:04:56.618200   32536 system_pods.go:116] waiting for k8s-apps to be running ...
	I0331 18:04:56.823045   32536 system_pods.go:86] 6 kube-system pods found
	I0331 18:04:56.823082   32536 system_pods.go:89] "coredns-787d4945fb-hcrtc" [1e78e1f9-1a39-4c02-a4e9-51e5b268d077] Running
	I0331 18:04:56.823092   32536 system_pods.go:89] "etcd-pause-939189" [cdc68c44-f3a4-4655-9818-48f074e8e376] Running
	I0331 18:04:56.823099   32536 system_pods.go:89] "kube-apiserver-pause-939189" [c40b018d-97b2-4cdf-9edc-e1473d304c55] Running
	I0331 18:04:56.823107   32536 system_pods.go:89] "kube-controller-manager-pause-939189" [69a62fcf-5db8-4354-aa08-ee5d2209a0ed] Running
	I0331 18:04:56.823113   32536 system_pods.go:89] "kube-proxy-jg8p6" [dd3378f4-948b-4bec-abd3-ea9dc35d3259] Running
	I0331 18:04:56.823120   32536 system_pods.go:89] "kube-scheduler-pause-939189" [b51eb2f5-8508-46f2-8c02-652ad1a69a1e] Running
	I0331 18:04:56.823129   32536 system_pods.go:126] duration metric: took 204.923041ms to wait for k8s-apps to be running ...
	I0331 18:04:56.823144   32536 system_svc.go:44] waiting for kubelet service to be running ....
	I0331 18:04:56.823194   32536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 18:04:56.843108   32536 system_svc.go:56] duration metric: took 19.952106ms WaitForService to wait for kubelet.
	I0331 18:04:56.843157   32536 kubeadm.go:578] duration metric: took 3.370313636s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0331 18:04:56.843181   32536 node_conditions.go:102] verifying NodePressure condition ...
	I0331 18:04:57.019150   32536 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0331 18:04:57.019178   32536 node_conditions.go:123] node cpu capacity is 2
	I0331 18:04:57.019188   32536 node_conditions.go:105] duration metric: took 176.00176ms to run NodePressure ...
	I0331 18:04:57.019201   32536 start.go:228] waiting for startup goroutines ...
	I0331 18:04:57.019209   32536 start.go:233] waiting for cluster config update ...
	I0331 18:04:57.019219   32536 start.go:242] writing updated cluster config ...
	I0331 18:04:57.019587   32536 ssh_runner.go:195] Run: rm -f paused
	I0331 18:04:57.094738   32536 start.go:557] kubectl: 1.26.3, cluster: 1.26.3 (minor skew: 0)
	I0331 18:04:57.097707   32536 out.go:177] * Done! kubectl is now configured to use "pause-939189" cluster and "default" namespace by default
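The closing version line compares kubectl's minor version with the cluster's; "minor skew: 0" means they match. The arithmetic as a tiny sketch, assuming plain MAJOR.MINOR.PATCH strings:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns the absolute difference of the MINOR components.
	func minorSkew(client, cluster string) int {
		minor := func(v string) int {
			parts := strings.Split(v, ".")
			if len(parts) < 2 {
				return 0
			}
			n, _ := strconv.Atoi(parts[1])
			return n
		}
		d := minor(client) - minor(cluster)
		if d < 0 {
			d = -d
		}
		return d
	}

	func main() {
		fmt.Println("minor skew:", minorSkew("1.26.3", "1.26.3")) // prints 0
	}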
	I0331 18:04:52.670594   33820 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
	W0331 18:04:52.706864   33820 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0331 18:04:52.707029   33820 profile.go:148] Saving config to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/NoKubernetes-746317/config.json ...
	I0331 18:04:52.707063   33820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/NoKubernetes-746317/config.json: {Name:mkc819cfb6c45ebbebd0d82f4a0be54fd6cd98e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 18:04:52.707228   33820 cache.go:193] Successfully downloaded all kic artifacts
	I0331 18:04:52.707251   33820 start.go:364] acquiring machines lock for NoKubernetes-746317: {Name:mkfdc5208de17d93700ea90324b4f36214eab469 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0331 18:04:55.847800   33276 out.go:204] * Preparing Kubernetes v1.26.3 on Docker 20.10.23 ...
	I0331 18:04:55.847864   33276 main.go:141] libmachine: (auto-347180) Calling .GetIP
	I0331 18:04:55.850787   33276 main.go:141] libmachine: (auto-347180) DBG | domain auto-347180 has defined MAC address 52:54:00:61:01:e7 in network mk-auto-347180
	I0331 18:04:55.851207   33276 main.go:141] libmachine: (auto-347180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:01:e7", ip: ""} in network mk-auto-347180: {Iface:virbr3 ExpiryTime:2023-03-31 19:04:35 +0000 UTC Type:0 Mac:52:54:00:61:01:e7 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:auto-347180 Clientid:01:52:54:00:61:01:e7}
	I0331 18:04:55.851239   33276 main.go:141] libmachine: (auto-347180) DBG | domain auto-347180 has defined IP address 192.168.72.199 and MAC address 52:54:00:61:01:e7 in network mk-auto-347180
	I0331 18:04:55.851415   33276 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0331 18:04:55.855857   33276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
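The /etc/hosts update above is an idempotent replace: drop any stale host.minikube.internal line, append the fresh record, copy the file back. The same pipeline as a local Go sketch (run it against a scratch copy, not the real /etc/hosts):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func ensureHostRecord(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) { // grep -v $'\t<name>$'
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name) // echo "<ip>\t<name>"
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostRecord("/tmp/hosts", "192.168.72.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}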
	I0331 18:04:55.868328   33276 localpath.go:92] copying /home/jenkins/minikube-integration/16144-3494/.minikube/client.crt -> /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt
	I0331 18:04:55.868487   33276 localpath.go:117] copying /home/jenkins/minikube-integration/16144-3494/.minikube/client.key -> /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.key
	I0331 18:04:55.868617   33276 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0331 18:04:55.868673   33276 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 18:04:55.896702   33276 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0331 18:04:55.896733   33276 docker.go:569] Images already preloaded, skipping extraction
	I0331 18:04:55.896797   33276 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 18:04:55.924955   33276 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0331 18:04:55.924992   33276 cache_images.go:84] Images are preloaded, skipping loading
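"Images are preloaded, skipping loading" is decided by listing what the runtime already holds and checking the expected set against it. A sketch of that comparison; the expected list below is a subset copied from the stdout block above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		have := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.26.3",
			"registry.k8s.io/etcd:3.5.6-0",
			"registry.k8s.io/pause:3.9",
		}
		for _, img := range want {
			if !have[img] {
				fmt.Println("missing:", img)
			}
		}
	}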
	I0331 18:04:55.925053   33276 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0331 18:04:55.965144   33276 cni.go:84] Creating CNI manager for ""
	I0331 18:04:55.965172   33276 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 18:04:55.965185   33276 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0331 18:04:55.965205   33276 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.199 APIServerPort:8443 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-347180 NodeName:auto-347180 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0331 18:04:55.965393   33276 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "auto-347180"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.199
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.199"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0331 18:04:55.965514   33276 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=auto-347180 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.3 ClusterName:auto-347180 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0331 18:04:55.965613   33276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3
	I0331 18:04:55.975410   33276 binaries.go:44] Found k8s binaries, skipping transfer
	I0331 18:04:55.975480   33276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0331 18:04:55.984755   33276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0331 18:04:56.009787   33276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0331 18:04:56.031312   33276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0331 18:04:56.049714   33276 ssh_runner.go:195] Run: grep 192.168.72.199	control-plane.minikube.internal$ /etc/hosts
	I0331 18:04:56.054641   33276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0331 18:04:56.067876   33276 certs.go:56] Setting up /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180 for IP: 192.168.72.199
	I0331 18:04:56.067912   33276 certs.go:186] acquiring lock for shared ca certs: {Name:mk5b2b979756b4a682c5be81dc53f006bb9a7a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 18:04:56.068110   33276 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16144-3494/.minikube/ca.key
	I0331 18:04:56.068167   33276 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16144-3494/.minikube/proxy-client-ca.key
	I0331 18:04:56.068278   33276 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.key
	I0331 18:04:56.068308   33276 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key.217b3e23
	I0331 18:04:56.068325   33276 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt.217b3e23 with IP's: [192.168.72.199 10.96.0.1 127.0.0.1 10.0.0.1]
	I0331 18:04:56.209196   33276 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt.217b3e23 ...
	I0331 18:04:56.209224   33276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt.217b3e23: {Name:mk3e4cd47c6706ab2f578dfdd08d80ebdd3c15fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 18:04:56.209429   33276 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key.217b3e23 ...
	I0331 18:04:56.209445   33276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key.217b3e23: {Name:mk009817638857b2bbdb66530e778b671a0003f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 18:04:56.209547   33276 certs.go:333] copying /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt.217b3e23 -> /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt
	I0331 18:04:56.209609   33276 certs.go:337] copying /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key.217b3e23 -> /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key
	I0331 18:04:56.209656   33276 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.key
	I0331 18:04:56.209668   33276 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.crt with IP's: []
	I0331 18:04:56.257382   33276 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.crt ...
	I0331 18:04:56.257405   33276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.crt: {Name:mk082703dadea0ea3251f4202bbf72399caa3a43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 18:04:56.257583   33276 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.key ...
	I0331 18:04:56.257595   33276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.key: {Name:mk4b72bffb94c8b27e86fc5f7b2d38af391fe2ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
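The apiserver cert generated above is issued for the node IP plus the service and loopback addresses listed in the Generating line. A self-signed stand-in showing the same IP-SAN mechanics with crypto/x509 (the real cert is signed by minikubeCA, not self-signed; IPs are copied from the log):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses: []net.IP{
				net.ParseIP("192.168.72.199"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}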
	I0331 18:04:56.257819   33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/10540.pem (1338 bytes)
	W0331 18:04:56.257876   33276 certs.go:397] ignoring /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/10540_empty.pem, impossibly tiny 0 bytes
	I0331 18:04:56.257892   33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca-key.pem (1675 bytes)
	I0331 18:04:56.257924   33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca.pem (1078 bytes)
	I0331 18:04:56.257959   33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/cert.pem (1123 bytes)
	I0331 18:04:56.257987   33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/key.pem (1679 bytes)
	I0331 18:04:56.258026   33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/105402.pem (1708 bytes)
	I0331 18:04:56.258526   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0331 18:04:56.287806   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0331 18:04:56.314968   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0331 18:04:56.338082   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0331 18:04:56.360708   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0331 18:04:56.390138   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0331 18:04:56.419129   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0331 18:04:56.447101   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0331 18:04:56.472169   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/105402.pem --> /usr/share/ca-certificates/105402.pem (1708 bytes)
	I0331 18:04:56.498664   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0331 18:04:56.525516   33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/certs/10540.pem --> /usr/share/ca-certificates/10540.pem (1338 bytes)
	I0331 18:04:56.548806   33276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0331 18:04:56.565642   33276 ssh_runner.go:195] Run: openssl version
	I0331 18:04:56.571067   33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105402.pem && ln -fs /usr/share/ca-certificates/105402.pem /etc/ssl/certs/105402.pem"
	I0331 18:04:56.580624   33276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105402.pem
	I0331 18:04:56.585385   33276 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 31 17:26 /usr/share/ca-certificates/105402.pem
	I0331 18:04:56.585449   33276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105402.pem
	I0331 18:04:56.591662   33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105402.pem /etc/ssl/certs/3ec20f2e.0"
	I0331 18:04:56.602558   33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0331 18:04:56.612933   33276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0331 18:04:56.619029   33276 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 31 17:21 /usr/share/ca-certificates/minikubeCA.pem
	I0331 18:04:56.619087   33276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0331 18:04:56.626198   33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0331 18:04:56.639266   33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10540.pem && ln -fs /usr/share/ca-certificates/10540.pem /etc/ssl/certs/10540.pem"
	I0331 18:04:56.649914   33276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10540.pem
	I0331 18:04:56.654454   33276 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 31 17:26 /usr/share/ca-certificates/10540.pem
	I0331 18:04:56.654515   33276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10540.pem
	I0331 18:04:56.661570   33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10540.pem /etc/ssl/certs/51391683.0"
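Each cert block above repeats one pattern: hash the certificate with openssl, then symlink it as <hash>.0 so OpenSSL-style lookups under /etc/ssl/certs resolve it. A sketch of that hash-and-link step using scratch paths:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/tmp/minikubeCA.pem" // stand-in for /usr/share/ca-certificates/minikubeCA.pem
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
		link := "/tmp/" + hash + ".0"          // stand-in for /etc/ssl/certs/<hash>.0
		os.Remove(link)                        // ln -fs semantics: replace any existing link
		if err := os.Symlink(cert, link); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("linked", link, "->", cert)
	}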
	I0331 18:04:56.671169   33276 kubeadm.go:401] StartCluster: {Name:auto-347180 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16191/minikube-v1.29.0-1680115329-16191-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:auto-347180 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 18:04:56.671303   33276 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0331 18:04:56.695923   33276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0331 18:04:56.705641   33276 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0331 18:04:56.715247   33276 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0331 18:04:56.724602   33276 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0331 18:04:56.724655   33276 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0331 18:04:56.783971   33276 kubeadm.go:322] [init] Using Kubernetes version: v1.26.3
	I0331 18:04:56.784098   33276 kubeadm.go:322] [preflight] Running pre-flight checks
	I0331 18:04:56.929895   33276 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0331 18:04:56.930047   33276 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0331 18:04:56.930171   33276 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0331 18:04:57.156879   33276 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0331 18:04:54.390483   33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
	I0331 18:04:54.390970   33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
	I0331 18:04:54.390985   33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:54.390921   33603 retry.go:31] will retry after 1.935547s: waiting for machine to come up
	I0331 18:04:56.329196   33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
	I0331 18:04:56.329773   33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
	I0331 18:04:56.329792   33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:56.329712   33603 retry.go:31] will retry after 2.673868459s: waiting for machine to come up
	I0331 18:04:57.159756   33276 out.go:204]   - Generating certificates and keys ...
	I0331 18:04:57.159894   33276 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0331 18:04:57.159974   33276 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0331 18:04:57.249986   33276 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0331 18:04:57.520865   33276 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0331 18:04:58.125540   33276 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0331 18:04:58.484579   33276 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0331 18:04:58.862388   33276 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0331 18:04:58.862887   33276 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [auto-347180 localhost] and IPs [192.168.72.199 127.0.0.1 ::1]
	
	* 
	* ==> Docker <==
	* -- Journal begins at Fri 2023-03-31 18:02:19 UTC, ends at Fri 2023-03-31 18:04:59 UTC. --
	Mar 31 18:04:32 pause-939189 dockerd[4567]: time="2023-03-31T18:04:32.708286788Z" level=warning msg="cleaning up after shim disconnected" id=b400c024f135f7c82274f810b9ce06d15d41eb95e87b7caae02c5db9542e56db namespace=moby
	Mar 31 18:04:32 pause-939189 dockerd[4567]: time="2023-03-31T18:04:32.708340669Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 31 18:04:32 pause-939189 cri-dockerd[5345]: W0331 18:04:32.836659    5345 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.348379648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.348500345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.348521902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.348533652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.357176945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.357265075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.357291341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.357305204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 31 18:04:39 pause-939189 cri-dockerd[5345]: time="2023-03-31T18:04:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.947465780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.947526265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.947543565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.947555826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.953976070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.954296632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.954453909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.954623054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 31 18:04:41 pause-939189 cri-dockerd[5345]: time="2023-03-31T18:04:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/11bb612576207ce6f9fdbde8dfa7f6235a96c8d3be559f2e51d8d4b173aa4b51/resolv.conf as [nameserver 192.168.122.1]"
	Mar 31 18:04:41 pause-939189 dockerd[4567]: time="2023-03-31T18:04:41.977346347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 31 18:04:41 pause-939189 dockerd[4567]: time="2023-03-31T18:04:41.977635522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 31 18:04:41 pause-939189 dockerd[4567]: time="2023-03-31T18:04:41.977752683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 31 18:04:41 pause-939189 dockerd[4567]: time="2023-03-31T18:04:41.977778301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	1344b5c000a9d       5185b96f0becf       18 seconds ago       Running             coredns                   2                   11bb612576207
	1686d0df28f10       92ed2bec97a63       19 seconds ago       Running             kube-proxy                3                   18b52638ab7a1
	5d40b2ef4a864       5a79047369329       24 seconds ago       Running             kube-scheduler            3                   df301869b351d
	80b600760e999       fce326961ae2d       24 seconds ago       Running             etcd                      3                   1089f600d6711
	84de5d76d35ca       ce8c2293ef09c       28 seconds ago       Running             kube-controller-manager   2                   55c3c7ee9ca0a
	966b1cd3b351e       1d9b3cbae03ce       30 seconds ago       Running             kube-apiserver            2                   0afb944a4f151
	a0ad0a35a3e08       fce326961ae2d       45 seconds ago       Exited              etcd                      2                   c447bce0c8aef
	b4599f5bff86d       5a79047369329       45 seconds ago       Exited              kube-scheduler            2                   6981b4d73a6c9
	9999f58d27656       92ed2bec97a63       47 seconds ago       Exited              kube-proxy                2                   f5b35d44675c8
	b400c024f135f       5185b96f0becf       About a minute ago   Exited              coredns                   1                   5e8b08d2a8f2f
	874fcc56f9f62       1d9b3cbae03ce       About a minute ago   Exited              kube-apiserver            1                   4045aa0f265a1
	8ace7d6c4bee4       ce8c2293ef09c       About a minute ago   Exited              kube-controller-manager   1                   b034146fe7e8c
	
	* 
	* ==> coredns [1344b5c000a9] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:58096 - 62967 "HINFO IN 3459962459257687508.4367275231804161359. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020935271s
	
	* 
	* ==> coredns [b400c024f135] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:42721 - 9088 "HINFO IN 8560628874867663181.8710474958470687856. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.051252273s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               pause-939189
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-939189
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=945b3fc45ee9ac8e1ceaffb00a71ec22c717b10e
	                    minikube.k8s.io/name=pause-939189
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_03_31T18_03_00_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Mar 2023 18:02:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-939189
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 Mar 2023 18:04:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Mar 2023 18:04:39 +0000   Fri, 31 Mar 2023 18:02:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Mar 2023 18:04:39 +0000   Fri, 31 Mar 2023 18:02:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Mar 2023 18:04:39 +0000   Fri, 31 Mar 2023 18:02:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 31 Mar 2023 18:04:39 +0000   Fri, 31 Mar 2023 18:03:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.142
	  Hostname:    pause-939189
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 ff362cba6608463787695edbccc756af
	  System UUID:                ff362cba-6608-4637-8769-5edbccc756af
	  Boot ID:                    8edfbfeb-24ea-46a9-b4c5-e31dc2d1b4c1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.3
	  Kube-Proxy Version:         v1.26.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-787d4945fb-hcrtc                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     108s
	  kube-system                 etcd-pause-939189                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m
	  kube-system                 kube-apiserver-pause-939189             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-pause-939189    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-jg8p6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-pause-939189             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 104s                   kube-proxy       
	  Normal  Starting                 18s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m10s (x4 over 2m10s)  kubelet          Node pause-939189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x4 over 2m10s)  kubelet          Node pause-939189 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x4 over 2m10s)  kubelet          Node pause-939189 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m                     kubelet          Node pause-939189 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m                     kubelet          Node pause-939189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                     kubelet          Node pause-939189 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m                     kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m                     kubelet          Node pause-939189 status is now: NodeReady
	  Normal  Starting                 2m                     kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                   node-controller  Node pause-939189 event: Registered Node pause-939189 in Controller
	  Normal  Starting                 26s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)      kubelet          Node pause-939189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)      kubelet          Node pause-939189 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)      kubelet          Node pause-939189 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                     node-controller  Node pause-939189 event: Registered Node pause-939189 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.422579] systemd-fstab-generator[930]: Ignoring "noauto" for root device
	[  +0.164482] systemd-fstab-generator[941]: Ignoring "noauto" for root device
	[  +0.161981] systemd-fstab-generator[954]: Ignoring "noauto" for root device
	[  +1.600832] systemd-fstab-generator[1102]: Ignoring "noauto" for root device
	[  +0.111337] systemd-fstab-generator[1113]: Ignoring "noauto" for root device
	[  +0.130984] systemd-fstab-generator[1124]: Ignoring "noauto" for root device
	[  +0.124503] systemd-fstab-generator[1135]: Ignoring "noauto" for root device
	[  +0.132321] systemd-fstab-generator[1149]: Ignoring "noauto" for root device
	[  +4.351511] systemd-fstab-generator[1397]: Ignoring "noauto" for root device
	[  +0.702241] kauditd_printk_skb: 68 callbacks suppressed
	[  +9.105596] systemd-fstab-generator[2340]: Ignoring "noauto" for root device
	[Mar31 18:03] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.099775] kauditd_printk_skb: 28 callbacks suppressed
	[ +22.013414] systemd-fstab-generator[3826]: Ignoring "noauto" for root device
	[  +0.416829] systemd-fstab-generator[3860]: Ignoring "noauto" for root device
	[  +0.213956] systemd-fstab-generator[3871]: Ignoring "noauto" for root device
	[  +0.230022] systemd-fstab-generator[3884]: Ignoring "noauto" for root device
	[  +5.258034] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.349775] systemd-fstab-generator[4980]: Ignoring "noauto" for root device
	[  +0.138234] systemd-fstab-generator[4991]: Ignoring "noauto" for root device
	[  +0.169296] systemd-fstab-generator[5007]: Ignoring "noauto" for root device
	[  +0.160988] systemd-fstab-generator[5056]: Ignoring "noauto" for root device
	[  +0.226282] systemd-fstab-generator[5127]: Ignoring "noauto" for root device
	[  +4.119790] kauditd_printk_skb: 37 callbacks suppressed
	[Mar31 18:04] systemd-fstab-generator[7161]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [80b600760e99] <==
	* {"level":"warn","ts":"2023-03-31T18:04:50.753Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-03-31T18:04:50.314Z","time spent":"439.098122ms","remote":"127.0.0.1:52040","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6620,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-939189\" mod_revision:461 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-939189\" value_size:6558 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-939189\" > >"}
	{"level":"warn","ts":"2023-03-31T18:04:50.754Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"212.221672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:341"}
	{"level":"info","ts":"2023-03-31T18:04:50.754Z","caller":"traceutil/trace.go:171","msg":"trace[1823280090] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:462; }","duration":"212.395865ms","start":"2023-03-31T18:04:50.542Z","end":"2023-03-31T18:04:50.754Z","steps":["trace[1823280090] 'agreement among raft nodes before linearized reading'  (duration: 212.138709ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-31T18:04:50.754Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"341.184734ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-939189\" ","response":"range_response_count:1 size:5480"}
	{"level":"info","ts":"2023-03-31T18:04:50.754Z","caller":"traceutil/trace.go:171","msg":"trace[1705229913] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-939189; range_end:; response_count:1; response_revision:462; }","duration":"341.208794ms","start":"2023-03-31T18:04:50.413Z","end":"2023-03-31T18:04:50.754Z","steps":["trace[1705229913] 'agreement among raft nodes before linearized reading'  (duration: 341.128291ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-31T18:04:50.754Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-03-31T18:04:50.413Z","time spent":"341.245678ms","remote":"127.0.0.1:52040","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5504,"request content":"key:\"/registry/pods/kube-system/etcd-pause-939189\" "}
	{"level":"warn","ts":"2023-03-31T18:04:51.208Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"258.605359ms","expected-duration":"100ms","prefix":"","request":"header:<ID:839788533735404794 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:0ba78738d7beb4f9>","response":"size:41"}
	{"level":"info","ts":"2023-03-31T18:04:51.209Z","caller":"traceutil/trace.go:171","msg":"trace[2128410207] linearizableReadLoop","detail":"{readStateIndex:500; appliedIndex:499; }","duration":"296.499176ms","start":"2023-03-31T18:04:50.912Z","end":"2023-03-31T18:04:51.209Z","steps":["trace[2128410207] 'read index received'  (duration: 37.740315ms)","trace[2128410207] 'applied index is now lower than readState.Index'  (duration: 258.757557ms)"],"step_count":2}
	{"level":"warn","ts":"2023-03-31T18:04:51.209Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"296.647465ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-939189\" ","response":"range_response_count:1 size:5480"}
	{"level":"info","ts":"2023-03-31T18:04:51.209Z","caller":"traceutil/trace.go:171","msg":"trace[478960090] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-939189; range_end:; response_count:1; response_revision:462; }","duration":"296.673964ms","start":"2023-03-31T18:04:50.912Z","end":"2023-03-31T18:04:51.209Z","steps":["trace[478960090] 'agreement among raft nodes before linearized reading'  (duration: 296.561324ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-31T18:04:51.209Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-03-31T18:04:50.762Z","time spent":"447.271669ms","remote":"127.0.0.1:52016","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2023-03-31T18:04:52.108Z","caller":"traceutil/trace.go:171","msg":"trace[1920228168] linearizableReadLoop","detail":"{readStateIndex:502; appliedIndex:501; }","duration":"165.267816ms","start":"2023-03-31T18:04:51.943Z","end":"2023-03-31T18:04:52.108Z","steps":["trace[1920228168] 'read index received'  (duration: 165.022721ms)","trace[1920228168] 'applied index is now lower than readState.Index'  (duration: 244.277µs)"],"step_count":2}
	{"level":"info","ts":"2023-03-31T18:04:52.110Z","caller":"traceutil/trace.go:171","msg":"trace[1687701317] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"176.741493ms","start":"2023-03-31T18:04:51.933Z","end":"2023-03-31T18:04:52.110Z","steps":["trace[1687701317] 'process raft request'  (duration: 175.168227ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-31T18:04:52.112Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"168.992818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2023-03-31T18:04:52.112Z","caller":"traceutil/trace.go:171","msg":"trace[1794617064] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:464; }","duration":"169.069396ms","start":"2023-03-31T18:04:51.943Z","end":"2023-03-31T18:04:52.112Z","steps":["trace[1794617064] 'agreement among raft nodes before linearized reading'  (duration: 165.391165ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-31T18:04:52.293Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"123.74239ms","expected-duration":"100ms","prefix":"","request":"header:<ID:839788533735404827 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-f9qtf\" mod_revision:390 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-f9qtf\" value_size:1239 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-f9qtf\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-03-31T18:04:52.294Z","caller":"traceutil/trace.go:171","msg":"trace[280136650] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"168.841202ms","start":"2023-03-31T18:04:52.125Z","end":"2023-03-31T18:04:52.294Z","steps":["trace[280136650] 'process raft request'  (duration: 44.44482ms)","trace[280136650] 'compare'  (duration: 123.644413ms)"],"step_count":2}
	{"level":"info","ts":"2023-03-31T18:04:52.297Z","caller":"traceutil/trace.go:171","msg":"trace[929692375] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"142.41231ms","start":"2023-03-31T18:04:52.154Z","end":"2023-03-31T18:04:52.297Z","steps":["trace[929692375] 'process raft request'  (duration: 142.313651ms)"],"step_count":1}
	{"level":"info","ts":"2023-03-31T18:04:52.298Z","caller":"traceutil/trace.go:171","msg":"trace[1640521255] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"169.933179ms","start":"2023-03-31T18:04:52.128Z","end":"2023-03-31T18:04:52.298Z","steps":["trace[1640521255] 'process raft request'  (duration: 168.949367ms)"],"step_count":1}
	{"level":"info","ts":"2023-03-31T18:04:52.583Z","caller":"traceutil/trace.go:171","msg":"trace[1929288585] linearizableReadLoop","detail":"{readStateIndex:506; appliedIndex:505; }","duration":"170.211991ms","start":"2023-03-31T18:04:52.412Z","end":"2023-03-31T18:04:52.583Z","steps":["trace[1929288585] 'read index received'  (duration: 128.7627ms)","trace[1929288585] 'applied index is now lower than readState.Index'  (duration: 41.448583ms)"],"step_count":2}
	{"level":"info","ts":"2023-03-31T18:04:52.583Z","caller":"traceutil/trace.go:171","msg":"trace[47408908] transaction","detail":"{read_only:false; response_revision:468; number_of_response:1; }","duration":"258.75753ms","start":"2023-03-31T18:04:52.324Z","end":"2023-03-31T18:04:52.583Z","steps":["trace[47408908] 'process raft request'  (duration: 216.820717ms)","trace[47408908] 'compare'  (duration: 41.26405ms)"],"step_count":2}
	{"level":"warn","ts":"2023-03-31T18:04:52.584Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"171.519483ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-939189\" ","response":"range_response_count:1 size:5480"}
	{"level":"info","ts":"2023-03-31T18:04:52.584Z","caller":"traceutil/trace.go:171","msg":"trace[1263506650] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-939189; range_end:; response_count:1; response_revision:468; }","duration":"171.595141ms","start":"2023-03-31T18:04:52.412Z","end":"2023-03-31T18:04:52.584Z","steps":["trace[1263506650] 'agreement among raft nodes before linearized reading'  (duration: 171.444814ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-31T18:04:52.584Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"150.725144ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-03-31T18:04:52.585Z","caller":"traceutil/trace.go:171","msg":"trace[213446996] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:468; }","duration":"150.795214ms","start":"2023-03-31T18:04:52.434Z","end":"2023-03-31T18:04:52.584Z","steps":["trace[213446996] 'agreement among raft nodes before linearized reading'  (duration: 150.635678ms)"],"step_count":1}
	
	* 
	* ==> etcd [a0ad0a35a3e0] <==
	* {"level":"info","ts":"2023-03-31T18:04:14.959Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.39.142:2380"}
	{"level":"info","ts":"2023-03-31T18:04:14.959Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.142:2380"}
	{"level":"info","ts":"2023-03-31T18:04:14.959Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-03-31T18:04:14.962Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"d7a5d3e20a6b0ba7","initial-advertise-peer-urls":["https://192.168.39.142:2380"],"listen-peer-urls":["https://192.168.39.142:2380"],"advertise-client-urls":["https://192.168.39.142:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.142:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-03-31T18:04:14.962Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 received MsgPreVoteResp from d7a5d3e20a6b0ba7 at term 3"}
	{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 became candidate at term 4"}
	{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 received MsgVoteResp from d7a5d3e20a6b0ba7 at term 4"}
	{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 became leader at term 4"}
	{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d7a5d3e20a6b0ba7 elected leader d7a5d3e20a6b0ba7 at term 4"}
	{"level":"info","ts":"2023-03-31T18:04:15.341Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"d7a5d3e20a6b0ba7","local-member-attributes":"{Name:pause-939189 ClientURLs:[https://192.168.39.142:2379]}","request-path":"/0/members/d7a5d3e20a6b0ba7/attributes","cluster-id":"f7d6b5428c0c9dc0","publish-timeout":"7s"}
	{"level":"info","ts":"2023-03-31T18:04:15.341Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-31T18:04:15.342Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-03-31T18:04:15.342Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-31T18:04:15.343Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.142:2379"}
	{"level":"info","ts":"2023-03-31T18:04:15.347Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-03-31T18:04:15.347Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-03-31T18:04:27.719Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-03-31T18:04:27.719Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-939189","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.142:2380"],"advertise-client-urls":["https://192.168.39.142:2379"]}
	{"level":"info","ts":"2023-03-31T18:04:27.723Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d7a5d3e20a6b0ba7","current-leader-member-id":"d7a5d3e20a6b0ba7"}
	{"level":"info","ts":"2023-03-31T18:04:27.727Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.39.142:2380"}
	{"level":"info","ts":"2023-03-31T18:04:27.728Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.39.142:2380"}
	{"level":"info","ts":"2023-03-31T18:04:27.728Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-939189","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.142:2380"],"advertise-client-urls":["https://192.168.39.142:2379"]}
	
	* 
	* ==> kernel <==
	*  18:05:00 up 2 min,  0 users,  load average: 2.10, 1.02, 0.39
	Linux pause-939189 5.10.57 #1 SMP Wed Mar 29 23:38:32 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [874fcc56f9f6] <==
	* W0331 18:04:09.094355       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0331 18:04:10.570941       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0331 18:04:14.640331       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	E0331 18:04:19.527936       1 run.go:74] "command failed" err="context deadline exceeded"
	
	* 
	* ==> kube-apiserver [966b1cd3b351] <==
	* I0331 18:04:39.222688       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0331 18:04:39.205515       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0331 18:04:39.314255       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0331 18:04:39.316506       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0331 18:04:39.317062       1 shared_informer.go:280] Caches are synced for configmaps
	I0331 18:04:39.318946       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0331 18:04:39.323304       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0331 18:04:39.338800       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0331 18:04:39.338942       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0331 18:04:39.339358       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0331 18:04:39.397474       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0331 18:04:39.418720       1 cache.go:39] Caches are synced for autoregister controller
	I0331 18:04:39.958002       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0331 18:04:40.221547       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0331 18:04:41.099152       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0331 18:04:41.124185       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0331 18:04:41.212998       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0331 18:04:41.267710       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0331 18:04:41.286487       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0331 18:04:51.284113       1 trace.go:219] Trace[2025945949]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.142,type:*v1.Endpoints,resource:apiServerIPInfo (31-Mar-2023 18:04:50.760) (total time: 523ms):
	Trace[2025945949]: ---"Transaction prepared" 449ms (18:04:51.210)
	Trace[2025945949]: ---"Txn call completed" 73ms (18:04:51.284)
	Trace[2025945949]: [523.960493ms] [523.960493ms] END
	I0331 18:04:51.929561       1 controller.go:615] quota admission added evaluator for: endpoints
	I0331 18:04:52.124697       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [84de5d76d35c] <==
	* W0331 18:04:52.065251       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="pause-939189" does not exist
	I0331 18:04:52.067639       1 shared_informer.go:280] Caches are synced for resource quota
	I0331 18:04:52.076620       1 shared_informer.go:280] Caches are synced for attach detach
	I0331 18:04:52.084564       1 shared_informer.go:280] Caches are synced for daemon sets
	I0331 18:04:52.087592       1 shared_informer.go:280] Caches are synced for endpoint_slice
	I0331 18:04:52.100706       1 shared_informer.go:280] Caches are synced for node
	I0331 18:04:52.100905       1 range_allocator.go:167] Sending events to api server.
	I0331 18:04:52.101097       1 range_allocator.go:171] Starting range CIDR allocator
	I0331 18:04:52.101132       1 shared_informer.go:273] Waiting for caches to sync for cidrallocator
	I0331 18:04:52.101145       1 shared_informer.go:280] Caches are synced for cidrallocator
	I0331 18:04:52.109512       1 shared_informer.go:280] Caches are synced for GC
	I0331 18:04:52.110949       1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
	I0331 18:04:52.111820       1 shared_informer.go:280] Caches are synced for resource quota
	I0331 18:04:52.151113       1 shared_informer.go:280] Caches are synced for taint
	I0331 18:04:52.151644       1 shared_informer.go:280] Caches are synced for TTL
	I0331 18:04:52.151696       1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone: 
	W0331 18:04:52.152283       1 node_lifecycle_controller.go:1053] Missing timestamp for Node pause-939189. Assuming now as a timestamp.
	I0331 18:04:52.152564       1 node_lifecycle_controller.go:1254] Controller detected that zone  is now in state Normal.
	I0331 18:04:52.152806       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0331 18:04:52.153068       1 taint_manager.go:211] "Sending events to api server"
	I0331 18:04:52.154301       1 event.go:294] "Event occurred" object="pause-939189" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-939189 event: Registered Node pause-939189 in Controller"
	I0331 18:04:52.157444       1 shared_informer.go:280] Caches are synced for persistent volume
	I0331 18:04:52.506059       1 shared_informer.go:280] Caches are synced for garbage collector
	I0331 18:04:52.506479       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0331 18:04:52.533136       1 shared_informer.go:280] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [8ace7d6c4bee] <==
	* I0331 18:03:59.321744       1 serving.go:348] Generated self-signed cert in-memory
	I0331 18:03:59.853937       1 controllermanager.go:182] Version: v1.26.3
	I0331 18:03:59.853990       1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0331 18:03:59.855979       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0331 18:03:59.856127       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0331 18:03:59.856668       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0331 18:03:59.856802       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	F0331 18:04:20.535428       1 controllermanager.go:228] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [1686d0df28f1] <==
	* I0331 18:04:41.170371       1 node.go:163] Successfully retrieved node IP: 192.168.39.142
	I0331 18:04:41.170425       1 server_others.go:109] "Detected node IP" address="192.168.39.142"
	I0331 18:04:41.170450       1 server_others.go:535] "Using iptables proxy"
	I0331 18:04:41.271349       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0331 18:04:41.271390       1 server_others.go:176] "Using iptables Proxier"
	I0331 18:04:41.271446       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0331 18:04:41.271898       1 server.go:655] "Version info" version="v1.26.3"
	I0331 18:04:41.271978       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0331 18:04:41.276289       1 config.go:317] "Starting service config controller"
	I0331 18:04:41.276432       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0331 18:04:41.276461       1 config.go:226] "Starting endpoint slice config controller"
	I0331 18:04:41.276465       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0331 18:04:41.277123       1 config.go:444] "Starting node config controller"
	I0331 18:04:41.277131       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0331 18:04:41.376963       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0331 18:04:41.377002       1 shared_informer.go:280] Caches are synced for service config
	I0331 18:04:41.377248       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-proxy [9999f58d2765] <==
	* E0331 18:04:20.538153       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-939189": dial tcp 192.168.39.142:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.142:56890->192.168.39.142:8443: read: connection reset by peer
	E0331 18:04:21.665395       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-939189": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:23.920058       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-939189": dial tcp 192.168.39.142:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [5d40b2ef4a86] <==
	* I0331 18:04:36.274158       1 serving.go:348] Generated self-signed cert in-memory
	W0331 18:04:39.233042       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0331 18:04:39.233351       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0331 18:04:39.233637       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0331 18:04:39.233672       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0331 18:04:39.306413       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.3"
	I0331 18:04:39.306462       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0331 18:04:39.308017       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0331 18:04:39.308563       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0331 18:04:39.308610       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0331 18:04:39.308627       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0331 18:04:39.409801       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [b4599f5bff86] <==
	* E0331 18:04:24.036070       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.142:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	W0331 18:04:24.450493       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.142:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:24.450560       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.142:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	W0331 18:04:24.681951       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:24.682039       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	W0331 18:04:24.877656       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:24.878016       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	W0331 18:04:24.900986       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.39.142:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:24.901338       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.142:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	W0331 18:04:24.987726       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:24.988045       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	W0331 18:04:25.024394       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.142:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:25.024478       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.142:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	W0331 18:04:25.132338       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.142:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:25.132589       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.142:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	W0331 18:04:26.745186       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.142:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:26.745273       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.142:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	W0331 18:04:26.909186       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.142:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:26.909259       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.142:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	W0331 18:04:27.588118       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.39.142:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	E0331 18:04:27.588180       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.142:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
	I0331 18:04:27.668688       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0331 18:04:27.668780       1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0331 18:04:27.668791       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0331 18:04:27.669106       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Fri 2023-03-31 18:02:19 UTC, ends at Fri 2023-03-31 18:05:00 UTC. --
	Mar 31 18:04:35 pause-939189 kubelet[7167]: I0331 18:04:35.060753    7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbeb3c050b5e21453f641a818794f61-kubeconfig\") pod \"kube-controller-manager-pause-939189\" (UID: \"5bbeb3c050b5e21453f641a818794f61\") " pod="kube-system/kube-controller-manager-pause-939189"
	Mar 31 18:04:35 pause-939189 kubelet[7167]: I0331 18:04:35.060806    7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbeb3c050b5e21453f641a818794f61-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-939189\" (UID: \"5bbeb3c050b5e21453f641a818794f61\") " pod="kube-system/kube-controller-manager-pause-939189"
	Mar 31 18:04:35 pause-939189 kubelet[7167]: I0331 18:04:35.173548    7167 scope.go:115] "RemoveContainer" containerID="a0ad0a35a3e08720ef402cc44066aa6415d3380188ccf061278936b018f9164f"
	Mar 31 18:04:35 pause-939189 kubelet[7167]: I0331 18:04:35.206303    7167 scope.go:115] "RemoveContainer" containerID="b4599f5bff86da254627b8fa420dbfa886e737fe4bf8140cd8ac5ec3f882a89e"
	Mar 31 18:04:35 pause-939189 kubelet[7167]: I0331 18:04:35.871491    7167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5b35d44675c82be44631616cd6f0a52aa1dc911e88776342deacc611d359e35"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.403200    7167 kubelet_node_status.go:108] "Node was previously registered" node="pause-939189"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.403314    7167 kubelet_node_status.go:73] "Successfully registered node" node="pause-939189"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.406119    7167 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.407529    7167 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.534534    7167 apiserver.go:52] "Watching apiserver"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.537600    7167 topology_manager.go:210] "Topology Admit Handler"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.537920    7167 topology_manager.go:210] "Topology Admit Handler"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.561329    7167 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.592448    7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd3378f4-948b-4bec-abd3-ea9dc35d3259-xtables-lock\") pod \"kube-proxy-jg8p6\" (UID: \"dd3378f4-948b-4bec-abd3-ea9dc35d3259\") " pod="kube-system/kube-proxy-jg8p6"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.592793    7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e78e1f9-1a39-4c02-a4e9-51e5b268d077-config-volume\") pod \"coredns-787d4945fb-hcrtc\" (UID: \"1e78e1f9-1a39-4c02-a4e9-51e5b268d077\") " pod="kube-system/coredns-787d4945fb-hcrtc"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.593000    7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxlhf\" (UniqueName: \"kubernetes.io/projected/dd3378f4-948b-4bec-abd3-ea9dc35d3259-kube-api-access-nxlhf\") pod \"kube-proxy-jg8p6\" (UID: \"dd3378f4-948b-4bec-abd3-ea9dc35d3259\") " pod="kube-system/kube-proxy-jg8p6"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.593182    7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd3378f4-948b-4bec-abd3-ea9dc35d3259-lib-modules\") pod \"kube-proxy-jg8p6\" (UID: \"dd3378f4-948b-4bec-abd3-ea9dc35d3259\") " pod="kube-system/kube-proxy-jg8p6"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.593344    7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n26cp\" (UniqueName: \"kubernetes.io/projected/1e78e1f9-1a39-4c02-a4e9-51e5b268d077-kube-api-access-n26cp\") pod \"coredns-787d4945fb-hcrtc\" (UID: \"1e78e1f9-1a39-4c02-a4e9-51e5b268d077\") " pod="kube-system/coredns-787d4945fb-hcrtc"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.593511    7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dd3378f4-948b-4bec-abd3-ea9dc35d3259-kube-proxy\") pod \"kube-proxy-jg8p6\" (UID: \"dd3378f4-948b-4bec-abd3-ea9dc35d3259\") " pod="kube-system/kube-proxy-jg8p6"
	Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.593631    7167 reconciler.go:41] "Reconciler: start to sync state"
	Mar 31 18:04:40 pause-939189 kubelet[7167]: I0331 18:04:40.739124    7167 scope.go:115] "RemoveContainer" containerID="9999f58d276569aa698d96721d17b94fa850bf4239d5df11ce622ad76d4c9c20"
	Mar 31 18:04:40 pause-939189 kubelet[7167]: I0331 18:04:40.900279    7167 request.go:690] Waited for 1.195299342s due to client-side throttling, not priority and fairness, request: PATCH:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-939189/status
	Mar 31 18:04:41 pause-939189 kubelet[7167]: I0331 18:04:41.825587    7167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11bb612576207ce6f9fdbde8dfa7f6235a96c8d3be559f2e51d8d4b173aa4b51"
	Mar 31 18:04:43 pause-939189 kubelet[7167]: I0331 18:04:43.869081    7167 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Mar 31 18:04:45 pause-939189 kubelet[7167]: I0331 18:04:45.920720    7167 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-939189 -n pause-939189
helpers_test.go:261: (dbg) Run:  kubectl --context pause-939189 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (95.40s)
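
Note on this failure: the captured logs show the second start rebuilt the control plane rather than reusing it. Two etcd containers appear (a0ad0a35a3e0, shut down at 18:04:27, then 80b600760e99), two kube-apiserver containers (874fcc56f9f6, which exited with "context deadline exceeded", then 966b1cd3b351), and the kubelet events record two restarts. A minimal manual reproduction, assuming the same profile name and driver as the test, would be:

	# hypothetical repro outside the test harness; profile name taken from the run above
	out/minikube-linux-amd64 start -p pause-939189 --driver=kvm2
	out/minikube-linux-amd64 start -p pause-939189 --alsologtostderr -v=1 --driver=kvm2 2>&1 \
	  | grep "The running cluster does not require reconfiguration"

pause_test.go:100 expects the grep to match; an empty result reproduces the failure.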

TestNoKubernetes/serial/StartWithK8s (38.5s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-746317 --driver=kvm2 
E0331 18:04:06.732625   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-746317 --driver=kvm2 : exit status 90 (38.216916312s)

-- stdout --
	* [NoKubernetes-746317] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16144
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16144-3494/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16144-3494/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node NoKubernetes-746317 in cluster NoKubernetes-746317
	* Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for cri-docker.service failed because the control process exited with error code.
	See "systemctl status cri-docker.service" and "journalctl -xe" for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-linux-amd64 start -p NoKubernetes-746317 --driver=kvm2 " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-746317 -n NoKubernetes-746317
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-746317 -n NoKubernetes-746317: exit status 6 (285.635518ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0331 18:04:29.545422   33410 status.go:415] kubeconfig endpoint: extract IP: "NoKubernetes-746317" does not appear in /home/jenkins/minikube-integration/16144-3494/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-746317" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (38.50s)
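
Note on this failure: the start aborted at RUNTIME_ENABLE because "sudo systemctl restart cri-docker" failed inside the new VM, so this is a container-runtime problem rather than test logic. The error text itself names the follow-up commands; a sketch for collecting that state, assuming the VM is still running under the same profile:

	# assumes the profile/VM from the run above still exists
	out/minikube-linux-amd64 ssh -p NoKubernetes-746317 -- sudo systemctl status cri-docker.service
	out/minikube-linux-amd64 ssh -p NoKubernetes-746317 -- sudo journalctl -xeu cri-docker.service
	out/minikube-linux-amd64 logs --file=logs.txt -p NoKubernetes-746317

The status check that follows (exit 6, stale kubeconfig context) is a side effect of the aborted start, not an independent failure.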

TestNoKubernetes/serial/StartWithStopK8s (22.51s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-746317 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-746317 --no-kubernetes --driver=kvm2 : (19.547871173s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-746317 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-746317 status -o json: exit status 6 (249.258451ms)

-- stdout --
	{"Name":"NoKubernetes-746317","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

-- /stdout --
** stderr ** 
	E0331 18:04:49.348437   33669 status.go:415] kubeconfig endpoint: extract IP: "NoKubernetes-746317" does not appear in /home/jenkins/minikube-integration/16144-3494/kubeconfig

** /stderr **
no_kubernetes_test.go:203: failed to run minikube status with json output. args "out/minikube-linux-amd64 -p NoKubernetes-746317 status -o json" : exit status 6
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-746317
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-746317: (2.109842135s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-746317 -n NoKubernetes-746317
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-746317 -n NoKubernetes-746317: exit status 85 (317.629085ms)

-- stdout --
	* Profile "NoKubernetes-746317" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-746317"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "NoKubernetes-746317" host is not running, skipping log retrieval (state="* Profile \"NoKubernetes-746317\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p NoKubernetes-746317\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-746317 -n NoKubernetes-746317
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-746317 -n NoKubernetes-746317: exit status 85 (282.260567ms)

-- stdout --
	* Profile "NoKubernetes-746317" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-746317"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "NoKubernetes-746317" host is not running, skipping log retrieval (state="* Profile \"NoKubernetes-746317\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p NoKubernetes-746317\"")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (22.51s)
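
Note on this failure: the --no-kubernetes start itself succeeded in under 20 seconds; the test failed only because "status -o json" exited 6 with Kubeconfig "Misconfigured", the same kubeconfig-endpoint extraction error as in the previous test. A quick way to inspect that state, assuming the profile still exists at that point:

	# hypothetical inspection steps; the profile was deleted later in the test
	out/minikube-linux-amd64 -p NoKubernetes-746317 status -o json
	kubectl config get-contexts
	out/minikube-linux-amd64 update-context -p NoKubernetes-746317

update-context is the remedy the earlier status warning suggests for a stale kubectl context.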

Test pass (276/312)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 8.46
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.26.3/json-events 4.07
11 TestDownloadOnly/v1.26.3/preload-exists 0
15 TestDownloadOnly/v1.26.3/LogsDuration 0.05
17 TestDownloadOnly/v1.27.0-rc.0/json-events 5.22
18 TestDownloadOnly/v1.27.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.27.0-rc.0/LogsDuration 0.06
23 TestDownloadOnly/DeleteAll 0.38
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.36
26 TestBinaryMirror 0.63
27 TestOffline 123.68
29 TestAddons/Setup 147.47
31 TestAddons/parallel/Registry 17.26
32 TestAddons/parallel/Ingress 24.2
33 TestAddons/parallel/MetricsServer 6.02
34 TestAddons/parallel/HelmTiller 11.69
36 TestAddons/parallel/CSI 70.62
37 TestAddons/parallel/Headlamp 12.66
38 TestAddons/parallel/CloudSpanner 5.7
41 TestAddons/serial/GCPAuth/Namespaces 0.14
42 TestAddons/StoppedEnableDisable 13.4
43 TestCertOptions 91.17
44 TestCertExpiration 276.96
45 TestDockerFlags 132.99
46 TestForceSystemdFlag 79.5
47 TestForceSystemdEnv 65.06
48 TestKVMDriverInstallOrUpdate 3.6
52 TestErrorSpam/setup 52.4
53 TestErrorSpam/start 0.33
54 TestErrorSpam/status 0.71
55 TestErrorSpam/pause 1.17
56 TestErrorSpam/unpause 1.31
57 TestErrorSpam/stop 13.22
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 70.04
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 42.26
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.1
68 TestFunctional/serial/CacheCmd/cache/add_remote 2.93
69 TestFunctional/serial/CacheCmd/cache/add_local 1.44
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
71 TestFunctional/serial/CacheCmd/cache/list 0.04
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.39
74 TestFunctional/serial/CacheCmd/cache/delete 0.09
75 TestFunctional/serial/MinikubeKubectlCmd 0.1
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
77 TestFunctional/serial/ExtraConfig 49.37
78 TestFunctional/serial/ComponentHealth 0.07
79 TestFunctional/serial/LogsCmd 1.17
80 TestFunctional/serial/LogsFileCmd 1.27
82 TestFunctional/parallel/ConfigCmd 0.33
83 TestFunctional/parallel/DashboardCmd 36.91
84 TestFunctional/parallel/DryRun 0.31
85 TestFunctional/parallel/InternationalLanguage 0.13
86 TestFunctional/parallel/StatusCmd 1.14
90 TestFunctional/parallel/ServiceCmdConnect 13.82
91 TestFunctional/parallel/AddonsCmd 0.18
92 TestFunctional/parallel/PersistentVolumeClaim 57.58
94 TestFunctional/parallel/SSHCmd 0.43
95 TestFunctional/parallel/CpCmd 0.87
96 TestFunctional/parallel/MySQL 35.75
97 TestFunctional/parallel/FileSync 0.22
98 TestFunctional/parallel/CertSync 1.61
102 TestFunctional/parallel/NodeLabels 0.09
104 TestFunctional/parallel/NonActiveRuntimeDisabled 0.23
106 TestFunctional/parallel/License 0.17
107 TestFunctional/parallel/Version/short 0.04
108 TestFunctional/parallel/Version/components 0.71
109 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
110 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
111 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
112 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
113 TestFunctional/parallel/ImageCommands/ImageBuild 4.01
114 TestFunctional/parallel/ImageCommands/Setup 1.26
115 TestFunctional/parallel/ServiceCmd/DeployApp 14.23
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.39
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.51
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.55
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.28
129 TestFunctional/parallel/ServiceCmd/List 0.36
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.36
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
132 TestFunctional/parallel/ServiceCmd/Format 0.44
133 TestFunctional/parallel/ServiceCmd/URL 0.33
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.67
135 TestFunctional/parallel/DockerEnv/bash 1.27
136 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
137 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
138 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.33
140 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
141 TestFunctional/parallel/MountCmd/any-port 9.68
142 TestFunctional/parallel/ProfileCmd/profile_list 0.3
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.13
145 TestFunctional/parallel/MountCmd/specific-port 1.84
146 TestFunctional/delete_addon-resizer_images 0.16
147 TestFunctional/delete_my-image_image 0.06
148 TestFunctional/delete_minikube_cached_images 0.06
149 TestGvisorAddon 325.39
152 TestImageBuild/serial/NormalBuild 2.31
153 TestImageBuild/serial/BuildWithBuildArg 1.66
154 TestImageBuild/serial/BuildWithDockerIgnore 0.5
155 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.34
158 TestIngressAddonLegacy/StartLegacyK8sCluster 110.94
160 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 18.38
161 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.5
162 TestIngressAddonLegacy/serial/ValidateIngressAddons 39.8
165 TestJSONOutput/start/Command 110.46
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/pause/Command 0.61
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/unpause/Command 0.57
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 13.1
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.43
193 TestMainNoArgs 0.04
194 TestMinikubeProfile 105.05
197 TestMountStart/serial/StartWithMountFirst 33.01
198 TestMountStart/serial/VerifyMountFirst 0.49
199 TestMountStart/serial/StartWithMountSecond 28.18
200 TestMountStart/serial/VerifyMountSecond 0.37
201 TestMountStart/serial/DeleteFirst 0.71
202 TestMountStart/serial/VerifyMountPostDelete 0.39
203 TestMountStart/serial/Stop 2.32
204 TestMountStart/serial/RestartStopped 23.29
205 TestMountStart/serial/VerifyMountPostStop 0.37
208 TestMultiNode/serial/FreshStart2Nodes 167.25
209 TestMultiNode/serial/DeployApp2Nodes 4.99
210 TestMultiNode/serial/PingHostFrom2Pods 0.93
211 TestMultiNode/serial/AddNode 54.75
212 TestMultiNode/serial/ProfileList 0.27
213 TestMultiNode/serial/CopyFile 7.38
214 TestMultiNode/serial/StopNode 3.96
215 TestMultiNode/serial/StartAfterStop 32.61
216 TestMultiNode/serial/RestartKeepsNodes 178.85
217 TestMultiNode/serial/DeleteNode 1.84
218 TestMultiNode/serial/StopMultiNode 26.29
219 TestMultiNode/serial/RestartMultiNode 104.9
220 TestMultiNode/serial/ValidateNameConflict 54.92
225 TestPreload 205.46
227 TestScheduledStopUnix 125.32
228 TestSkaffold 86.84
231 TestRunningBinaryUpgrade 154.09
233 TestKubernetesUpgrade 228.09
246 TestStoppedBinaryUpgrade/Setup 0.27
247 TestStoppedBinaryUpgrade/Upgrade 209.58
249 TestPause/serial/Start 112.79
250 TestStoppedBinaryUpgrade/MinikubeLogs 2.05
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
262 TestNetworkPlugins/group/auto/Start 73.91
264 TestNoKubernetes/serial/Start 43.75
265 TestNetworkPlugins/group/kindnet/Start 112.44
266 TestNetworkPlugins/group/auto/KubeletFlags 0.22
267 TestNetworkPlugins/group/auto/NetCatPod 12.42
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
269 TestNoKubernetes/serial/ProfileList 19.88
270 TestNetworkPlugins/group/auto/DNS 0.17
271 TestNetworkPlugins/group/auto/Localhost 0.16
272 TestNetworkPlugins/group/auto/HairPin 0.16
273 TestNoKubernetes/serial/Stop 2.27
274 TestNoKubernetes/serial/StartNoArgs 26.09
275 TestNetworkPlugins/group/calico/Start 129.74
276 TestNetworkPlugins/group/custom-flannel/Start 120.16
277 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
278 TestNetworkPlugins/group/false/Start 133.5
279 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
280 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
281 TestNetworkPlugins/group/kindnet/NetCatPod 12.41
282 TestNetworkPlugins/group/kindnet/DNS 0.2
283 TestNetworkPlugins/group/kindnet/Localhost 0.16
284 TestNetworkPlugins/group/kindnet/HairPin 0.15
285 TestNetworkPlugins/group/enable-default-cni/Start 94.79
286 TestNetworkPlugins/group/calico/ControllerPod 5.04
287 TestNetworkPlugins/group/calico/KubeletFlags 0.27
288 TestNetworkPlugins/group/calico/NetCatPod 15.75
289 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.49
290 TestNetworkPlugins/group/custom-flannel/NetCatPod 16.08
291 TestNetworkPlugins/group/calico/DNS 0.24
292 TestNetworkPlugins/group/calico/Localhost 0.23
293 TestNetworkPlugins/group/calico/HairPin 0.21
294 TestNetworkPlugins/group/custom-flannel/DNS 0.22
295 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
296 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
297 TestNetworkPlugins/group/false/KubeletFlags 0.22
298 TestNetworkPlugins/group/false/NetCatPod 14.51
299 TestNetworkPlugins/group/flannel/Start 86.64
300 TestNetworkPlugins/group/false/DNS 0.21
301 TestNetworkPlugins/group/false/Localhost 0.16
302 TestNetworkPlugins/group/false/HairPin 0.17
303 TestNetworkPlugins/group/bridge/Start 138.77
304 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
305 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.35
306 TestNetworkPlugins/group/kubenet/Start 127.31
307 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
308 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
309 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
311 TestStartStop/group/old-k8s-version/serial/FirstStart 181.37
312 TestNetworkPlugins/group/flannel/ControllerPod 5.03
313 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
314 TestNetworkPlugins/group/flannel/NetCatPod 15.51
315 TestNetworkPlugins/group/flannel/DNS 0.2
316 TestNetworkPlugins/group/flannel/Localhost 0.17
317 TestNetworkPlugins/group/flannel/HairPin 0.19
319 TestStartStop/group/no-preload/serial/FirstStart 97.93
320 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
321 TestNetworkPlugins/group/bridge/NetCatPod 14.44
322 TestNetworkPlugins/group/kubenet/KubeletFlags 0.23
323 TestNetworkPlugins/group/kubenet/NetCatPod 12.3
324 TestNetworkPlugins/group/bridge/DNS 0.19
325 TestNetworkPlugins/group/bridge/Localhost 0.16
326 TestNetworkPlugins/group/bridge/HairPin 0.14
327 TestNetworkPlugins/group/kubenet/DNS 0.21
328 TestNetworkPlugins/group/kubenet/Localhost 0.17
329 TestNetworkPlugins/group/kubenet/HairPin 0.17
331 TestStartStop/group/embed-certs/serial/FirstStart 81.87
333 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 106.7
334 TestStartStop/group/no-preload/serial/DeployApp 9.52
335 TestStartStop/group/old-k8s-version/serial/DeployApp 8.53
336 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.55
337 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.1
338 TestStartStop/group/old-k8s-version/serial/Stop 13.42
339 TestStartStop/group/no-preload/serial/Stop 14.17
340 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
341 TestStartStop/group/old-k8s-version/serial/SecondStart 448.8
342 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
343 TestStartStop/group/no-preload/serial/SecondStart 336.05
344 TestStartStop/group/embed-certs/serial/DeployApp 10.5
345 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.85
346 TestStartStop/group/embed-certs/serial/Stop 15.15
347 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
348 TestStartStop/group/embed-certs/serial/SecondStart 323.86
349 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.51
350 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
351 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.17
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
353 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 314.37
354 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
355 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
356 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
357 TestStartStop/group/no-preload/serial/Pause 2.96
359 TestStartStop/group/newest-cni/serial/FirstStart 76.65
360 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
361 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
362 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
363 TestStartStop/group/embed-certs/serial/Pause 2.69
364 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
365 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
366 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
367 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.99
368 TestStartStop/group/newest-cni/serial/DeployApp 0
369 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.03
370 TestStartStop/group/newest-cni/serial/Stop 8.11
371 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
372 TestStartStop/group/newest-cni/serial/SecondStart 47.23
373 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
374 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
375 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
376 TestStartStop/group/old-k8s-version/serial/Pause 2.44
377 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
378 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
379 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
380 TestStartStop/group/newest-cni/serial/Pause 2.26

TestDownloadOnly/v1.16.0/json-events (8.46s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-397469 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-397469 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (8.463788513s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.46s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-397469
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-397469: exit status 85 (55.868561ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-397469 | jenkins | v1.29.0 | 31 Mar 23 17:20 UTC |          |
	|         | -p download-only-397469        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/31 17:20:31
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0331 17:20:31.063593   10552 out.go:296] Setting OutFile to fd 1 ...
	I0331 17:20:31.063713   10552 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 17:20:31.063722   10552 out.go:309] Setting ErrFile to fd 2...
	I0331 17:20:31.063726   10552 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 17:20:31.063821   10552 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16144-3494/.minikube/bin
	W0331 17:20:31.063942   10552 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16144-3494/.minikube/config/config.json: open /home/jenkins/minikube-integration/16144-3494/.minikube/config/config.json: no such file or directory
	I0331 17:20:31.064475   10552 out.go:303] Setting JSON to true
	I0331 17:20:31.065264   10552 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":182,"bootTime":1680283049,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1031-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0331 17:20:31.065319   10552 start.go:135] virtualization: kvm guest
	I0331 17:20:31.068104   10552 out.go:97] [download-only-397469] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	W0331 17:20:31.068201   10552 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16144-3494/.minikube/cache/preloaded-tarball: no such file or directory
	I0331 17:20:31.069790   10552 out.go:169] MINIKUBE_LOCATION=16144
	I0331 17:20:31.068236   10552 notify.go:220] Checking for updates...
	I0331 17:20:31.072797   10552 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 17:20:31.074403   10552 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16144-3494/kubeconfig
	I0331 17:20:31.075874   10552 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16144-3494/.minikube
	I0331 17:20:31.077212   10552 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0331 17:20:31.079825   10552 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0331 17:20:31.080386   10552 driver.go:365] Setting default libvirt URI to qemu:///system
	I0331 17:20:31.197753   10552 out.go:97] Using the kvm2 driver based on user configuration
	I0331 17:20:31.197792   10552 start.go:295] selected driver: kvm2
	I0331 17:20:31.197800   10552 start.go:859] validating driver "kvm2" against <nil>
	I0331 17:20:31.198192   10552 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 17:20:31.198345   10552 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16144-3494/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0331 17:20:31.213616   10552 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0331 17:20:31.213677   10552 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0331 17:20:31.214127   10552 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
	I0331 17:20:31.214280   10552 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0331 17:20:31.214305   10552 cni.go:84] Creating CNI manager for ""
	I0331 17:20:31.214321   10552 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0331 17:20:31.214328   10552 start_flags.go:319] config:
	{Name:download-only-397469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-397469 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 17:20:31.214518   10552 iso.go:125] acquiring lock: {Name:mk48583bcdf05c8e72651ed56790356a32c028b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 17:20:31.216788   10552 out.go:97] Downloading VM boot image ...
	I0331 17:20:31.216830   10552 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16191/minikube-v1.29.0-1680115329-16191-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16191/minikube-v1.29.0-1680115329-16191-amd64.iso.sha256 -> /home/jenkins/minikube-integration/16144-3494/.minikube/cache/iso/amd64/minikube-v1.29.0-1680115329-16191-amd64.iso
	I0331 17:20:34.581438   10552 out.go:97] Starting control plane node download-only-397469 in cluster download-only-397469
	I0331 17:20:34.581458   10552 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0331 17:20:34.609140   10552 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0331 17:20:34.609188   10552 cache.go:57] Caching tarball of preloaded images
	I0331 17:20:34.609338   10552 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0331 17:20:34.611831   10552 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0331 17:20:34.611855   10552 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0331 17:20:34.639375   10552 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/16144-3494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-397469"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
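Note: the non-zero exit above is the expected outcome, not a flake. The profile was created with --download-only, so no control plane node was ever started, and "minikube logs" has nothing to collect beyond the audit table; the test passes because it anticipates the failure. A minimal reproduction sketch, reusing only the commands and profile name recorded in this run:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-397469 --force \
	  --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2
	out/minikube-linux-amd64 logs -p download-only-397469   # prints the audit log, then aborts
	echo $?                                                 # 85 in this run: the control plane node "" does not exist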

TestDownloadOnly/v1.26.3/json-events (4.07s)

=== RUN   TestDownloadOnly/v1.26.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-397469 --force --alsologtostderr --kubernetes-version=v1.26.3 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-397469 --force --alsologtostderr --kubernetes-version=v1.26.3 --container-runtime=docker --driver=kvm2 : (4.074326966s)
--- PASS: TestDownloadOnly/v1.26.3/json-events (4.07s)

TestDownloadOnly/v1.26.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.26.3/preload-exists
--- PASS: TestDownloadOnly/v1.26.3/preload-exists (0.00s)

TestDownloadOnly/v1.26.3/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.26.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-397469
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-397469: exit status 85 (54.345801ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-397469 | jenkins | v1.29.0 | 31 Mar 23 17:20 UTC |          |
	|         | -p download-only-397469        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-397469 | jenkins | v1.29.0 | 31 Mar 23 17:20 UTC |          |
	|         | -p download-only-397469        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/31 17:20:39
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0331 17:20:39.584467   10588 out.go:296] Setting OutFile to fd 1 ...
	I0331 17:20:39.584631   10588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 17:20:39.584644   10588 out.go:309] Setting ErrFile to fd 2...
	I0331 17:20:39.584650   10588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 17:20:39.584764   10588 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16144-3494/.minikube/bin
	W0331 17:20:39.584882   10588 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16144-3494/.minikube/config/config.json: open /home/jenkins/minikube-integration/16144-3494/.minikube/config/config.json: no such file or directory
	I0331 17:20:39.585278   10588 out.go:303] Setting JSON to true
	I0331 17:20:39.586086   10588 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":191,"bootTime":1680283049,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1031-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0331 17:20:39.586141   10588 start.go:135] virtualization: kvm guest
	I0331 17:20:39.588677   10588 out.go:97] [download-only-397469] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0331 17:20:39.590472   10588 out.go:169] MINIKUBE_LOCATION=16144
	I0331 17:20:39.588896   10588 notify.go:220] Checking for updates...
	I0331 17:20:39.594603   10588 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 17:20:39.596535   10588 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16144-3494/kubeconfig
	I0331 17:20:39.598198   10588 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16144-3494/.minikube
	I0331 17:20:39.599824   10588 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-397469"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.3/LogsDuration (0.05s)

TestDownloadOnly/v1.27.0-rc.0/json-events (5.22s)

=== RUN   TestDownloadOnly/v1.27.0-rc.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-397469 --force --alsologtostderr --kubernetes-version=v1.27.0-rc.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-397469 --force --alsologtostderr --kubernetes-version=v1.27.0-rc.0 --container-runtime=docker --driver=kvm2 : (5.223740034s)
--- PASS: TestDownloadOnly/v1.27.0-rc.0/json-events (5.22s)

TestDownloadOnly/v1.27.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.27.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.27.0-rc.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.27.0-rc.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-397469
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-397469: exit status 85 (55.684196ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-397469 | jenkins | v1.29.0 | 31 Mar 23 17:20 UTC |          |
	|         | -p download-only-397469           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-397469 | jenkins | v1.29.0 | 31 Mar 23 17:20 UTC |          |
	|         | -p download-only-397469           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.3      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-397469 | jenkins | v1.29.0 | 31 Mar 23 17:20 UTC |          |
	|         | -p download-only-397469           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.0-rc.0 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/31 17:20:43
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0331 17:20:43.715517   10626 out.go:296] Setting OutFile to fd 1 ...
	I0331 17:20:43.715638   10626 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 17:20:43.715649   10626 out.go:309] Setting ErrFile to fd 2...
	I0331 17:20:43.715656   10626 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 17:20:43.715770   10626 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16144-3494/.minikube/bin
	W0331 17:20:43.715882   10626 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16144-3494/.minikube/config/config.json: open /home/jenkins/minikube-integration/16144-3494/.minikube/config/config.json: no such file or directory
	I0331 17:20:43.716272   10626 out.go:303] Setting JSON to true
	I0331 17:20:43.717007   10626 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":195,"bootTime":1680283049,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1031-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0331 17:20:43.717058   10626 start.go:135] virtualization: kvm guest
	I0331 17:20:43.719141   10626 out.go:97] [download-only-397469] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0331 17:20:43.720802   10626 out.go:169] MINIKUBE_LOCATION=16144
	I0331 17:20:43.719341   10626 notify.go:220] Checking for updates...
	I0331 17:20:43.723696   10626 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 17:20:43.725246   10626 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16144-3494/kubeconfig
	I0331 17:20:43.726688   10626 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16144-3494/.minikube
	I0331 17:20:43.728242   10626 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0331 17:20:43.731019   10626 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0331 17:20:43.732129   10626 config.go:182] Loaded profile config "download-only-397469": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	W0331 17:20:43.732187   10626 start.go:767] api.Load failed for download-only-397469: filestore "download-only-397469": Docker machine "download-only-397469" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0331 17:20:43.732249   10626 driver.go:365] Setting default libvirt URI to qemu:///system
	W0331 17:20:43.732281   10626 start.go:767] api.Load failed for download-only-397469: filestore "download-only-397469": Docker machine "download-only-397469" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0331 17:20:43.763967   10626 out.go:97] Using the kvm2 driver based on existing profile
	I0331 17:20:43.763993   10626 start.go:295] selected driver: kvm2
	I0331 17:20:43.763998   10626 start.go:859] validating driver "kvm2" against &{Name:download-only-397469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16191/minikube-v1.29.0-1680115329-16191-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:download-only-397469 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 17:20:43.764360   10626 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 17:20:43.764433   10626 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16144-3494/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0331 17:20:43.778197   10626 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0331 17:20:43.778847   10626 cni.go:84] Creating CNI manager for ""
	I0331 17:20:43.778870   10626 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 17:20:43.778882   10626 start_flags.go:319] config:
	{Name:download-only-397469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16191/minikube-v1.29.0-1680115329-16191-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.0-rc.0 ClusterName:download-only-397469 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 17:20:43.779005   10626 iso.go:125] acquiring lock: {Name:mk48583bcdf05c8e72651ed56790356a32c028b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 17:20:43.780882   10626 out.go:97] Starting control plane node download-only-397469 in cluster download-only-397469
	I0331 17:20:43.780902   10626 preload.go:132] Checking if preload exists for k8s version v1.27.0-rc.0 and runtime docker
	I0331 17:20:43.804265   10626 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.0-rc.0/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0331 17:20:43.804301   10626 cache.go:57] Caching tarball of preloaded images
	I0331 17:20:43.804476   10626 preload.go:132] Checking if preload exists for k8s version v1.27.0-rc.0 and runtime docker
	I0331 17:20:43.806518   10626 out.go:97] Downloading Kubernetes v1.27.0-rc.0 preload ...
	I0331 17:20:43.806540   10626 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0331 17:20:43.834209   10626 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.0-rc.0/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:6096a776168534014d2f50b9988b2d60 -> /home/jenkins/minikube-integration/16144-3494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0331 17:20:47.350880   10626 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0331 17:20:47.350969   10626 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16144-3494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0331 17:20:48.153729   10626 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.0-rc.0 on docker
	I0331 17:20:48.153848   10626 profile.go:148] Saving config to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/download-only-397469/config.json ...
	I0331 17:20:48.154063   10626 preload.go:132] Checking if preload exists for k8s version v1.27.0-rc.0 and runtime docker
	I0331 17:20:48.154302   10626 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.0-rc.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.0-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/16144-3494/.minikube/cache/linux/amd64/v1.27.0-rc.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-397469"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.0-rc.0/LogsDuration (0.06s)

TestDownloadOnly/DeleteAll (0.38s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.38s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.36s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-397469
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.36s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-815858 --alsologtostderr --binary-mirror http://127.0.0.1:41783 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-815858" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-815858
--- PASS: TestBinaryMirror (0.63s)
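Note: TestBinaryMirror verifies that kubectl/kubelet/kubeadm downloads can be redirected away from the default release host via --binary-mirror; the 127.0.0.1:41783 address above is an HTTP server the test brings up locally. A hedged sketch of the same idea outside the test harness (python3 and the ./mirror directory are stand-ins, not part of the test; the directory would need to reproduce the upstream release paths):

	python3 -m http.server 41783 --directory ./mirror &   # stand-in mirror server
	out/minikube-linux-amd64 start --download-only -p binary-mirror-815858 \
	  --binary-mirror http://127.0.0.1:41783 --driver=kvm2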

TestOffline (123.68s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-469894 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-469894 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (2m2.45036103s)
helpers_test.go:175: Cleaning up "offline-docker-469894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-469894
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-469894: (1.225977811s)
--- PASS: TestOffline (123.68s)

TestAddons/Setup (147.47s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-104430 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-104430 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m27.469287414s)
--- PASS: TestAddons/Setup (147.47s)

TestAddons/parallel/Registry (17.26s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:305: registry stabilized in 25.680679ms
addons_test.go:307: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-qbt4r" [77240f9f-6161-404e-bed0-062601a8077d] Running
addons_test.go:307: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.013150212s
addons_test.go:310: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vtm2r" [c09fb662-d0b6-40c3-a5c3-b958898d82f4] Running
addons_test.go:310: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010146834s
addons_test.go:315: (dbg) Run:  kubectl --context addons-104430 delete po -l run=registry-test --now
addons_test.go:320: (dbg) Run:  kubectl --context addons-104430 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:320: (dbg) Done: kubectl --context addons-104430 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.401894433s)
addons_test.go:334: (dbg) Run:  out/minikube-linux-amd64 -p addons-104430 ip
addons_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p addons-104430 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.26s)
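Note: the decisive assertion above is the busybox probe, which exercises cluster DNS and the registry Service together by requesting headers from the Service's cluster-local name (wget --spider fetches without downloading, -S prints the server response). Reduced to its core, with an illustrative pod name:

	kubectl run --rm registry-probe --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"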

TestAddons/parallel/Ingress (24.2s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:182: (dbg) Run:  kubectl --context addons-104430 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Run:  kubectl --context addons-104430 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:220: (dbg) Run:  kubectl --context addons-104430 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:225: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a7ada5cf-b67c-4499-a974-72552796590e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a7ada5cf-b67c-4499-a974-72552796590e] Running
addons_test.go:225: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.013787405s
addons_test.go:237: (dbg) Run:  out/minikube-linux-amd64 -p addons-104430 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Run:  kubectl --context addons-104430 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-104430 ip
addons_test.go:272: (dbg) Run:  nslookup hello-john.test 192.168.39.50
addons_test.go:281: (dbg) Run:  out/minikube-linux-amd64 -p addons-104430 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:281: (dbg) Done: out/minikube-linux-amd64 -p addons-104430 addons disable ingress-dns --alsologtostderr -v=1: (1.16743699s)
addons_test.go:286: (dbg) Run:  out/minikube-linux-amd64 -p addons-104430 addons disable ingress --alsologtostderr -v=1
2023/03/31 17:23:34 [DEBUG] GET http://192.168.39.50:5000
addons_test.go:286: (dbg) Done: out/minikube-linux-amd64 -p addons-104430 addons disable ingress --alsologtostderr -v=1: (7.647425271s)
--- PASS: TestAddons/parallel/Ingress (24.20s)
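Note: two paths are verified above. The curl run inside the VM confirms that the ingress controller routes plain HTTP by Host header to the nginx Service, and the nslookup against the VM IP (192.168.39.50 in this run) confirms that ingress-dns answers for hostnames declared in ingress resources. The same checks against this run's profile:

	out/minikube-linux-amd64 -p addons-104430 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-104430 ip)"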

TestAddons/parallel/MetricsServer (6.02s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:382: metrics-server stabilized in 25.803874ms
addons_test.go:384: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-6588d95b98-kmvpl" [10f0a4c8-4d81-4811-97b2-21dea8a95ffc] Running
addons_test.go:384: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.01680844s
addons_test.go:390: (dbg) Run:  kubectl --context addons-104430 top pods -n kube-system
addons_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p addons-104430 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.02s)

TestAddons/parallel/HelmTiller (11.69s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:431: tiller-deploy stabilized in 3.55708ms
addons_test.go:433: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-mqd98" [20e56f3d-ea3f-4bb1-9c14-89ef3c686875] Running
addons_test.go:433: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.014106264s
addons_test.go:448: (dbg) Run:  kubectl --context addons-104430 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:448: (dbg) Done: kubectl --context addons-104430 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.135488335s)
addons_test.go:465: (dbg) Run:  out/minikube-linux-amd64 -p addons-104430 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.69s)

TestAddons/parallel/CSI (70.62s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:536: csi-hostpath-driver pods stabilized in 6.486389ms
addons_test.go:539: (dbg) Run:  kubectl --context addons-104430 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:549: (dbg) Run:  kubectl --context addons-104430 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [952b232c-a2db-45fb-8115-54f58d7e3fb8] Pending
helpers_test.go:344: "task-pv-pod" [952b232c-a2db-45fb-8115-54f58d7e3fb8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [952b232c-a2db-45fb-8115-54f58d7e3fb8] Running
addons_test.go:554: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.011591238s
addons_test.go:559: (dbg) Run:  kubectl --context addons-104430 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:564: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-104430 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-104430 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:569: (dbg) Run:  kubectl --context addons-104430 delete pod task-pv-pod
addons_test.go:569: (dbg) Done: kubectl --context addons-104430 delete pod task-pv-pod: (1.371683999s)
addons_test.go:575: (dbg) Run:  kubectl --context addons-104430 delete pvc hpvc
addons_test.go:581: (dbg) Run:  kubectl --context addons-104430 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-104430 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:591: (dbg) Run:  kubectl --context addons-104430 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:596: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [54cf545c-7c62-492b-960f-a0bd9620a120] Pending
helpers_test.go:344: "task-pv-pod-restore" [54cf545c-7c62-492b-960f-a0bd9620a120] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [54cf545c-7c62-492b-960f-a0bd9620a120] Running
addons_test.go:596: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.013423046s
addons_test.go:601: (dbg) Run:  kubectl --context addons-104430 delete pod task-pv-pod-restore
addons_test.go:601: (dbg) Done: kubectl --context addons-104430 delete pod task-pv-pod-restore: (1.432530136s)
addons_test.go:605: (dbg) Run:  kubectl --context addons-104430 delete pvc hpvc-restore
addons_test.go:609: (dbg) Run:  kubectl --context addons-104430 delete volumesnapshot new-snapshot-demo
addons_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p addons-104430 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:613: (dbg) Done: out/minikube-linux-amd64 -p addons-104430 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.602354103s)
addons_test.go:617: (dbg) Run:  out/minikube-linux-amd64 -p addons-104430 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (70.62s)
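For reference, the storage flow this test drives can be replayed by hand once the two addons are enabled. A sketch only: the manifests under testdata/csi-hostpath-driver/ are not reproduced in this report, and the profile name is a placeholder.

	minikube addons enable volumesnapshots -p <profile>
	minikube addons enable csi-hostpath-driver -p <profile>
	kubectl create -f pvc.yaml           # PersistentVolumeClaim "hpvc"
	kubectl get pvc hpvc -o jsonpath={.status.phase}    # poll until "Bound"
	kubectl create -f pv-pod.yaml        # pod that mounts the claim
	kubectl create -f snapshot.yaml      # VolumeSnapshot "new-snapshot-demo"
	kubectl get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
	kubectl create -f pvc-restore.yaml   # claim restored from the snapshot
	kubectl create -f pv-pod-restore.yaml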

TestAddons/parallel/Headlamp (12.66s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:799: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-104430 --alsologtostderr -v=1
addons_test.go:799: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-104430 --alsologtostderr -v=1: (1.648098154s)
addons_test.go:804: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58c48fc87f-w9mts" [9812958e-f6c9-4453-a2d9-7f676e797844] Pending
helpers_test.go:344: "headlamp-58c48fc87f-w9mts" [9812958e-f6c9-4453-a2d9-7f676e797844] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58c48fc87f-w9mts" [9812958e-f6c9-4453-a2d9-7f676e797844] Running
addons_test.go:804: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.014114752s
--- PASS: TestAddons/parallel/Headlamp (12.66s)
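The equivalent manual check, using the namespace and label the test waits on (profile name is a placeholder):

	minikube addons enable headlamp -p <profile>
	kubectl -n headlamp get pods -l app.kubernetes.io/name=headlamp -w   # wait for Running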

TestAddons/parallel/CloudSpanner (5.7s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:820: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5dd65ff88c-lzdnq" [9f5030f9-b29e-416a-a892-8f0ed539793a] Running
addons_test.go:820: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008877517s
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-104430
--- PASS: TestAddons/parallel/CloudSpanner (5.70s)
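Manual equivalent (the emulator deployment lands in the default namespace, per the selector above):

	minikube addons enable cloud-spanner -p <profile>
	kubectl get pods -l app=cloud-spanner-emulator
	minikube addons disable cloud-spanner -p <profile>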

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:625: (dbg) Run:  kubectl --context addons-104430 create ns new-namespace
addons_test.go:639: (dbg) Run:  kubectl --context addons-104430 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)
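The assertion here is that the gcp-auth addon replicates its gcp-auth secret into namespaces created after the addon is enabled. A manual spot check:

	kubectl create ns new-namespace
	kubectl get secret gcp-auth -n new-namespace   # should exist without being copied by hand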

TestAddons/StoppedEnableDisable (13.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-104430
addons_test.go:147: (dbg) Done: out/minikube-linux-amd64 stop -p addons-104430: (13.194356213s)
addons_test.go:151: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-104430
addons_test.go:155: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-104430
addons_test.go:160: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-104430
--- PASS: TestAddons/StoppedEnableDisable (13.40s)
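The point of this test is that addons can be toggled while the cluster is stopped. Sketch:

	minikube stop -p <profile>
	minikube addons enable dashboard -p <profile>    # no running VM required
	minikube addons disable dashboard -p <profile>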

TestCertOptions (91.17s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-885841 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-885841 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m29.623666598s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-885841 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-885841 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-885841 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-885841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-885841
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-885841: (1.079495108s)
--- PASS: TestCertOptions (91.17s)
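To confirm the custom SANs and API server port land in the generated certificate, the same openssl inspection can be run by hand (profile name is a placeholder):

	minikube ssh -p <profile> -- "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
	# expect 192.168.15.15 and www.google.com among the SANs
	kubectl config view --minify -o jsonpath={.clusters[0].cluster.server}   # expect the URL to end in :8555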

TestCertExpiration (276.96s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-549601 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-549601 --memory=2048 --cert-expiration=3m --driver=kvm2 : (53.294555643s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-549601 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-549601 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (42.141574836s)
helpers_test.go:175: Cleaning up "cert-expiration-549601" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-549601
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-549601: (1.526188682s)
--- PASS: TestCertExpiration (276.96s)
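The sequence: start with a deliberately short certificate window, let it lapse (roughly 3m of the 276s total sits between the two starts), then confirm a second start with a longer window regenerates the certificates. A manual check of the new expiry:

	minikube start -p <profile> --cert-expiration=3m --driver=kvm2
	# ...wait out the 3m window...
	minikube start -p <profile> --cert-expiration=8760h --driver=kvm2
	minikube ssh -p <profile> -- "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"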

TestDockerFlags (132.99s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-694274 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-694274 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (2m11.003262382s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-694274 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-694274 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-694274" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-694274
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-694274: (1.393365663s)
--- PASS: TestDockerFlags (132.99s)
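--docker-env entries should surface in the docker systemd unit's Environment property and --docker-opt entries as ExecStart flags; that is what the two systemctl queries above verify. Manually:

	minikube start -p <profile> --docker-env=FOO=BAR --docker-opt=debug --driver=kvm2
	minikube ssh -p <profile> -- "sudo systemctl show docker --property=Environment --no-pager"   # expect FOO=BAR
	minikube ssh -p <profile> -- "sudo systemctl show docker --property=ExecStart --no-pager"     # expect --debug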

TestForceSystemdFlag (79.5s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-498658 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-498658 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m18.076244805s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-498658 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-498658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-498658
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-498658: (1.120377732s)
--- PASS: TestForceSystemdFlag (79.50s)

TestForceSystemdEnv (65.06s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-066234 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-066234 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m3.677254496s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-066234 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-066234" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-066234
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-066234: (1.109316278s)
--- PASS: TestForceSystemdEnv (65.06s)
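This test and TestForceSystemdFlag assert the same outcome through two entry points: the --force-systemd flag, and (judging by the test name) the MINIKUBE_FORCE_SYSTEMD environment variable seen in the start banners. Either way the check is the cgroup driver docker reports:

	minikube start -p <profile> --force-systemd --driver=kvm2
	# or, presumably: MINIKUBE_FORCE_SYSTEMD=true minikube start -p <profile> --driver=kvm2
	minikube ssh -p <profile> -- "docker info --format {{.CgroupDriver}}"   # expect: systemd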

TestKVMDriverInstallOrUpdate (3.6s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.60s)

TestErrorSpam/setup (52.4s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-026624 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-026624 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-026624 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-026624 --driver=kvm2 : (52.404758642s)
--- PASS: TestErrorSpam/setup (52.40s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026624 --log_dir /tmp/nospam-026624 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026624 --log_dir /tmp/nospam-026624 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026624 --log_dir /tmp/nospam-026624 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.71s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026624 --log_dir /tmp/nospam-026624 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026624 --log_dir /tmp/nospam-026624 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026624 --log_dir /tmp/nospam-026624 status
--- PASS: TestErrorSpam/status (0.71s)

TestErrorSpam/pause (1.17s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026624 --log_dir /tmp/nospam-026624 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026624 --log_dir /tmp/nospam-026624 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026624 --log_dir /tmp/nospam-026624 pause
--- PASS: TestErrorSpam/pause (1.17s)

TestErrorSpam/unpause (1.31s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026624 --log_dir /tmp/nospam-026624 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026624 --log_dir /tmp/nospam-026624 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026624 --log_dir /tmp/nospam-026624 unpause
--- PASS: TestErrorSpam/unpause (1.31s)

TestErrorSpam/stop (13.22s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026624 --log_dir /tmp/nospam-026624 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-026624 --log_dir /tmp/nospam-026624 stop: (13.092106988s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026624 --log_dir /tmp/nospam-026624 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026624 --log_dir /tmp/nospam-026624 stop
--- PASS: TestErrorSpam/stop (13.22s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/test/nested/copy/10540/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (70.04s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-linux-amd64 start -p functional-217220 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2229: (dbg) Done: out/minikube-linux-amd64 start -p functional-217220 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m10.037953814s)
--- PASS: TestFunctional/serial/StartWithProxy (70.04s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (42.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-linux-amd64 start -p functional-217220 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-linux-amd64 start -p functional-217220 --alsologtostderr -v=8: (42.255960608s)
functional_test.go:658: soft start took 42.256690417s for "functional-217220" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.26s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-217220 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 cache add registry.k8s.io/pause:3.1
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 cache add registry.k8s.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-linux-amd64 -p functional-217220 cache add registry.k8s.io/pause:3.3: (1.011153302s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 cache add registry.k8s.io/pause:latest
functional_test.go:1044: (dbg) Done: out/minikube-linux-amd64 -p functional-217220 cache add registry.k8s.io/pause:latest: (1.000765558s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.93s)
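Manual equivalent of the cache subcommands exercised here:

	minikube -p <profile> cache add registry.k8s.io/pause:3.1
	minikube -p <profile> cache add registry.k8s.io/pause:latest
	minikube cache list                            # images cached on the host
	minikube -p <profile> ssh sudo crictl images   # and loaded into the node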

TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-217220 /tmp/TestFunctionalserialCacheCmdcacheadd_local3104453227/001
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 cache add minikube-local-cache-test:functional-217220
functional_test.go:1084: (dbg) Done: out/minikube-linux-amd64 -p functional-217220 cache add minikube-local-cache-test:functional-217220: (1.078541512s)
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 cache delete minikube-local-cache-test:functional-217220
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-217220
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.44s)
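The local variant builds an image against the host docker daemon first, then caches it into the node. Sketch with a placeholder image name:

	docker build -t my-cache-test:dev .
	minikube -p <profile> cache add my-cache-test:dev
	minikube -p <profile> cache delete my-cache-test:dev
	docker rmi my-cache-test:dev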

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1097: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.39s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217220 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (202.910703ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 cache reload
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.39s)
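The exit-status-1 block above is the expected midpoint: the image is removed inside the node, crictl confirms it is gone, and cache reload restores it from the host-side cache:

	minikube -p <profile> ssh sudo docker rmi registry.k8s.io/pause:latest
	minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: not present
	minikube -p <profile> cache reload
	minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again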

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 kubectl -- --context functional-217220 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-217220 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (49.37s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-linux-amd64 start -p functional-217220 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0331 17:28:18.153580   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
E0331 17:28:18.159572   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
E0331 17:28:18.169828   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
E0331 17:28:18.190136   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
E0331 17:28:18.230475   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
E0331 17:28:18.310846   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
E0331 17:28:18.471239   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
E0331 17:28:18.791808   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
E0331 17:28:19.432771   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
E0331 17:28:20.713369   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
E0331 17:28:23.275232   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
E0331 17:28:28.395622   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
E0331 17:28:38.636508   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
E0331 17:28:59.117137   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
functional_test.go:752: (dbg) Done: out/minikube-linux-amd64 start -p functional-217220 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.368023014s)
functional_test.go:756: restart took 49.368133948s for "functional-217220" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (49.37s)
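The E0331 cert_rotation lines appear to be background kubeconfig refreshes still pointed at the earlier, already-deleted addons-104430 profile; they do not affect this test. The flag itself takes component.key=value pairs that are passed through to the named control-plane component:

	minikube start -p <profile> \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all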

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-217220 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
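The phase/status pairs above come from the control-plane pod list; a compact manual equivalent:

	kubectl --context <profile> get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'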

TestFunctional/serial/LogsCmd (1.17s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 logs
functional_test.go:1231: (dbg) Done: out/minikube-linux-amd64 -p functional-217220 logs: (1.169289351s)
--- PASS: TestFunctional/serial/LogsCmd (1.17s)

TestFunctional/serial/LogsFileCmd (1.27s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 logs --file /tmp/TestFunctionalserialLogsFileCmd2474709236/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-linux-amd64 -p functional-217220 logs --file /tmp/TestFunctionalserialLogsFileCmd2474709236/001/logs.txt: (1.264570372s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.27s)

TestFunctional/parallel/ConfigCmd (0.33s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217220 config get cpus: exit status 14 (50.619163ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217220 config get cpus: exit status 14 (48.096429ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)
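Exit status 14 is the expected "key could not be found" result for config get on an unset key, so the run above round-trips set/get/unset cleanly:

	minikube -p <profile> config set cpus 2
	minikube -p <profile> config get cpus     # prints 2
	minikube -p <profile> config unset cpus
	minikube -p <profile> config get cpus     # exit status 14: key not found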

TestFunctional/parallel/DashboardCmd (36.91s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-217220 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-217220 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 17171: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (36.91s)

TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-linux-amd64 start -p functional-217220 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:969: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-217220 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (141.510623ms)

-- stdout --
	* [functional-217220] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16144
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16144-3494/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16144-3494/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0331 17:29:28.532812   17053 out.go:296] Setting OutFile to fd 1 ...
	I0331 17:29:28.532983   17053 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 17:29:28.532994   17053 out.go:309] Setting ErrFile to fd 2...
	I0331 17:29:28.533002   17053 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 17:29:28.533159   17053 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16144-3494/.minikube/bin
	I0331 17:29:28.533861   17053 out.go:303] Setting JSON to false
	I0331 17:29:28.535129   17053 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":720,"bootTime":1680283049,"procs":257,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1031-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0331 17:29:28.535200   17053 start.go:135] virtualization: kvm guest
	I0331 17:29:28.538172   17053 out.go:177] * [functional-217220] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0331 17:29:28.540044   17053 out.go:177]   - MINIKUBE_LOCATION=16144
	I0331 17:29:28.539983   17053 notify.go:220] Checking for updates...
	I0331 17:29:28.541624   17053 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 17:29:28.543247   17053 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16144-3494/kubeconfig
	I0331 17:29:28.544734   17053 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16144-3494/.minikube
	I0331 17:29:28.546219   17053 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0331 17:29:28.547784   17053 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0331 17:29:28.550149   17053 config.go:182] Loaded profile config "functional-217220": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 17:29:28.552185   17053 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 17:29:28.552244   17053 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 17:29:28.567550   17053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35749
	I0331 17:29:28.568006   17053 main.go:141] libmachine: () Calling .GetVersion
	I0331 17:29:28.568550   17053 main.go:141] libmachine: Using API Version  1
	I0331 17:29:28.568576   17053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 17:29:28.568988   17053 main.go:141] libmachine: () Calling .GetMachineName
	I0331 17:29:28.569188   17053 main.go:141] libmachine: (functional-217220) Calling .DriverName
	I0331 17:29:28.569371   17053 driver.go:365] Setting default libvirt URI to qemu:///system
	I0331 17:29:28.569667   17053 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 17:29:28.569690   17053 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 17:29:28.584127   17053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34155
	I0331 17:29:28.584520   17053 main.go:141] libmachine: () Calling .GetVersion
	I0331 17:29:28.584997   17053 main.go:141] libmachine: Using API Version  1
	I0331 17:29:28.585016   17053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 17:29:28.585383   17053 main.go:141] libmachine: () Calling .GetMachineName
	I0331 17:29:28.585587   17053 main.go:141] libmachine: (functional-217220) Calling .DriverName
	I0331 17:29:28.620174   17053 out.go:177] * Using the kvm2 driver based on existing profile
	I0331 17:29:28.621669   17053 start.go:295] selected driver: kvm2
	I0331 17:29:28.621686   17053 start.go:859] validating driver "kvm2" against &{Name:functional-217220 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16191/minikube-v1.29.0-1680115329-16191-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:functional-217220 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.123 Port:8441 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 17:29:28.621822   17053 start.go:870] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0331 17:29:28.624018   17053 out.go:177] 
	W0331 17:29:28.625636   17053 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0331 17:29:28.627132   17053 out.go:177] 

** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-linux-amd64 start -p functional-217220 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.31s)
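--dry-run runs the full validation path without touching the profile; exit status 23 corresponds to the RSRC_INSUFFICIENT_REQ_MEMORY guardrail visible in the stderr block:

	minikube start -p <profile> --dry-run --memory 250MB --driver=kvm2   # fails: below the 1800MB usable minimum
	minikube start -p <profile> --dry-run --driver=kvm2                  # passes validation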

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-linux-amd64 start -p functional-217220 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-217220 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (133.913061ms)

-- stdout --
	* [functional-217220] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16144
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16144-3494/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16144-3494/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0331 17:29:28.842271   17108 out.go:296] Setting OutFile to fd 1 ...
	I0331 17:29:28.842391   17108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 17:29:28.842402   17108 out.go:309] Setting ErrFile to fd 2...
	I0331 17:29:28.842407   17108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 17:29:28.842575   17108 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16144-3494/.minikube/bin
	I0331 17:29:28.843116   17108 out.go:303] Setting JSON to false
	I0331 17:29:28.844217   17108 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":720,"bootTime":1680283049,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1031-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0331 17:29:28.844276   17108 start.go:135] virtualization: kvm guest
	I0331 17:29:28.846625   17108 out.go:177] * [functional-217220] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	I0331 17:29:28.848714   17108 notify.go:220] Checking for updates...
	I0331 17:29:28.848723   17108 out.go:177]   - MINIKUBE_LOCATION=16144
	I0331 17:29:28.850224   17108 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 17:29:28.851763   17108 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16144-3494/kubeconfig
	I0331 17:29:28.853102   17108 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16144-3494/.minikube
	I0331 17:29:28.854313   17108 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0331 17:29:28.856640   17108 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0331 17:29:28.858484   17108 config.go:182] Loaded profile config "functional-217220": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 17:29:28.858875   17108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 17:29:28.858925   17108 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 17:29:28.873876   17108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I0331 17:29:28.874246   17108 main.go:141] libmachine: () Calling .GetVersion
	I0331 17:29:28.874796   17108 main.go:141] libmachine: Using API Version  1
	I0331 17:29:28.874817   17108 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 17:29:28.875214   17108 main.go:141] libmachine: () Calling .GetMachineName
	I0331 17:29:28.875438   17108 main.go:141] libmachine: (functional-217220) Calling .DriverName
	I0331 17:29:28.875650   17108 driver.go:365] Setting default libvirt URI to qemu:///system
	I0331 17:29:28.875941   17108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 17:29:28.875974   17108 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 17:29:28.890269   17108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42077
	I0331 17:29:28.890684   17108 main.go:141] libmachine: () Calling .GetVersion
	I0331 17:29:28.891167   17108 main.go:141] libmachine: Using API Version  1
	I0331 17:29:28.891199   17108 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 17:29:28.891557   17108 main.go:141] libmachine: () Calling .GetMachineName
	I0331 17:29:28.891738   17108 main.go:141] libmachine: (functional-217220) Calling .DriverName
	I0331 17:29:28.926680   17108 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0331 17:29:28.928479   17108 start.go:295] selected driver: kvm2
	I0331 17:29:28.928494   17108 start.go:859] validating driver "kvm2" against &{Name:functional-217220 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16191/minikube-v1.29.0-1680115329-16191-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:functional-217220 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.123 Port:8441 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 17:29:28.928602   17108 start.go:870] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0331 17:29:28.931015   17108 out.go:177] 
	W0331 17:29:28.932800   17108 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0331 17:29:28.934326   17108 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)
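This is the same RSRC_INSUFFICIENT_REQ_MEMORY failure as in DryRun, rendered in French. Presumably the test switches locale through the standard environment variables, along the lines of:

	LC_ALL=fr_FR.UTF-8 minikube start -p <profile> --dry-run --memory 250MB --driver=kvm2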

TestFunctional/parallel/StatusCmd (1.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 status
functional_test.go:855: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)
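The `-f` flag above takes a Go template rendered against the status object (note the format string reproduces the test command verbatim, including its "kublet" key). A standalone sketch of the same text/template mechanism; the Status struct and sample values here are illustrative, not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

type Status struct{ Host, Kubelet, APIServer, Kubeconfig string }

func main() {
	// Same template shape the test passes to `status -f`.
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
}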

TestFunctional/parallel/ServiceCmdConnect (13.82s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-217220 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-217220 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-75df956f7d-nxhlx" [fecb27aa-d401-4642-8618-e671510524e2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-75df956f7d-nxhlx" [fecb27aa-d401-4642-8618-e671510524e2] Running
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.014942516s
functional_test.go:1647: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 service hello-node-connect --url
functional_test.go:1653: found endpoint for hello-node-connect: http://192.168.39.123:31733
functional_test.go:1673: http://192.168.39.123:31733: success! body:

Hostname: hello-node-connect-75df956f7d-nxhlx

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.123:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.123:31733
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.82s)
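The pod spends its first seconds in Pending before the echoserver answers, so hitting a fresh NodePort URL needs a retry loop. A sketch of that wait, using the endpoint from the log; the timeout and polling interval are assumptions:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.39.123:31733"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil && resp.StatusCode == http.StatusOK {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("success! body:\n%s\n", body)
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(2 * time.Second) // pod may still be starting
	}
	fmt.Println("service never became reachable")
}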

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (57.58s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [948bdcf9-4cf2-45f7-881d-20ea5a9a3a0c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010535191s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-217220 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-217220 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-217220 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-217220 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-217220 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c73f0af1-4c49-46e6-b946-7b76126f70ed] Pending
helpers_test.go:344: "sp-pod" [c73f0af1-4c49-46e6-b946-7b76126f70ed] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c73f0af1-4c49-46e6-b946-7b76126f70ed] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.012459879s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-217220 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-217220 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-217220 delete -f testdata/storage-provisioner/pod.yaml: (1.171092686s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-217220 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a96ed61d-4998-47d3-a803-42433f2a4875] Pending
helpers_test.go:344: "sp-pod" [a96ed61d-4998-47d3-a803-42433f2a4875] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a96ed61d-4998-47d3-a803-42433f2a4875] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 31.016061024s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-217220 exec sp-pod -- ls /tmp/mount
2023/03/31 17:30:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (57.58s)
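The sequence above is a persistence check: write a file through one pod, delete the pod, recreate it against the same claim, and confirm the file survived the pod's lifetime. A sketch of that flow shelling out to kubectl, mirroring the logged commands; it omits the wait-for-Running step the real test performs between apply and read:

package main

import (
	"fmt"
	"os/exec"
)

// run invokes kubectl against the test cluster and echoes the output,
// mirroring the (dbg) Run lines in the log above.
func run(args ...string) error {
	base := []string{"--context", "functional-217220"}
	out, err := exec.Command("kubectl", append(base, args...)...).CombinedOutput()
	fmt.Printf("kubectl %v\n%s", args, out)
	return err
}

func main() {
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the first pod
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// the real test waits for the new pod to reach Running before this read
	run("exec", "sp-pod", "--", "ls", "/tmp/mount") // the file must still be there
}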

TestFunctional/parallel/SSHCmd (0.43s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

TestFunctional/parallel/CpCmd (0.87s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh -n functional-217220 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 cp functional-217220:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd741836623/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh -n functional-217220 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.87s)
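The cp test is a round trip: copy a file into the node, copy it back out, and verify the content is unchanged. A sketch of the same check, using the paths from the log plus a hypothetical local output path; error handling is trimmed:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	exec.Command("out/minikube-linux-amd64", "-p", "functional-217220",
		"cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt").Run()
	exec.Command("out/minikube-linux-amd64", "-p", "functional-217220",
		"cp", "functional-217220:/home/docker/cp-test.txt", "/tmp/cp-test-out.txt").Run()
	want, _ := os.ReadFile("testdata/cp-test.txt")
	got, _ := os.ReadFile("/tmp/cp-test-out.txt")
	fmt.Println("round trip intact:", bytes.Equal(want, got))
}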

TestFunctional/parallel/MySQL (35.75s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1788: (dbg) Run:  kubectl --context functional-217220 replace --force -f testdata/mysql.yaml
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-h8gv7" [5c07c9a1-1f3b-416e-bd63-96ad08dbd581] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-888f84dd9-h8gv7" [5c07c9a1-1f3b-416e-bd63-96ad08dbd581] Running
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.011320694s
functional_test.go:1802: (dbg) Run:  kubectl --context functional-217220 exec mysql-888f84dd9-h8gv7 -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-217220 exec mysql-888f84dd9-h8gv7 -- mysql -ppassword -e "show databases;": exit status 1 (267.793999ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-217220 exec mysql-888f84dd9-h8gv7 -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-217220 exec mysql-888f84dd9-h8gv7 -- mysql -ppassword -e "show databases;": exit status 1 (224.004516ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-217220 exec mysql-888f84dd9-h8gv7 -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-217220 exec mysql-888f84dd9-h8gv7 -- mysql -ppassword -e "show databases;": exit status 1 (233.380415ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-217220 exec mysql-888f84dd9-h8gv7 -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-217220 exec mysql-888f84dd9-h8gv7 -- mysql -ppassword -e "show databases;": exit status 1 (225.336887ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-217220 exec mysql-888f84dd9-h8gv7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (35.75s)
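The failed attempts above are ordinary MySQL startup noise: ERROR 2002 typically means the server socket is not accepting connections yet, and ERROR 1045 typically means the auth tables are still being initialized. The test simply retries until the query succeeds. A sketch of that retry loop; the attempt count and sleep are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-217220", "exec",
			"mysql-888f84dd9-h8gv7", "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("query succeeded:\n%s", out)
			return
		}
		fmt.Printf("attempt %d failed, retrying:\n%s", attempt, out)
		time.Sleep(3 * time.Second) // give mysqld time to finish initializing
	}
}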

TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/10540/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh "sudo cat /etc/test/nested/copy/10540/hosts"
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.61s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/10540.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh "sudo cat /etc/ssl/certs/10540.pem"
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/10540.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh "sudo cat /usr/share/ca-certificates/10540.pem"
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/105402.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh "sudo cat /etc/ssl/certs/105402.pem"
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/105402.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh "sudo cat /usr/share/ca-certificates/105402.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.61s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-217220 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
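The `--template` above relies on text/template's range over a map to print each label key. A standalone version of the same template; it ranges over a plain map rather than the `(index .items 0).metadata.labels` path kubectl resolves, and the sample labels are made up:

package main

import (
	"os"
	"text/template"
)

func main() {
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-217220",
		"kubernetes.io/os":       "linux",
	}
	tmpl := template.Must(template.New("labels").Parse(
		"'{{range $k, $v := .}}{{$k}} {{end}}'"))
	tmpl.Execute(os.Stdout, labels) // prints each label key followed by a space
}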

TestFunctional/parallel/NonActiveRuntimeDisabled (0.23s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh "sudo systemctl is-active crio"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217220 ssh "sudo systemctl is-active crio": exit status 1 (231.882539ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

-- /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.23s)
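The "failure" above is the expected result: `systemctl is-active` exits 0 only for an active unit, and a non-zero code (3 here, surfaced through ssh and then flattened to 1 by the harness) for an inactive one, which is exactly what the test wants for crio on a Docker cluster. A sketch of reading that exit code from Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("systemctl", "is-active", "crio")
	out, err := cmd.Output() // prints "inactive" on stdout even when exiting non-zero
	fmt.Printf("state: %s", out)
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit code:", exitErr.ExitCode()) // 3 means inactive, the desired outcome
	}
}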

TestFunctional/parallel/License (0.17s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.71s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.71s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 image ls --format short
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-217220 image ls --format short:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-217220
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-217220
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 image ls --format table
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-217220 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | 080ed0ed8312d | 142MB  |
| registry.k8s.io/kube-proxy                  | v1.26.3           | 92ed2bec97a63 | 65.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/minikube-local-cache-test | functional-217220 | 53f6e1cacecea | 30B    |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.26.3           | 5a79047369329 | 56.4MB |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/google-containers/addon-resizer      | functional-217220 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/localhost/my-image                | functional-217220 | a7ce556b52b7a | 1.24MB |
| registry.k8s.io/kube-apiserver              | v1.26.3           | 1d9b3cbae03ce | 134MB  |
| registry.k8s.io/kube-controller-manager     | v1.26.3           | ce8c2293ef09c | 123MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 image ls --format json
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-217220 image ls --format json:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"5a79047369329dff4a02e705e650664d2019e583b802416447a6a17e9debb62d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.3"],"size":"56400000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"92ed2bec97a637010666d6c4aa4d69b672baec0fd5d236d142e4227a3a0557d8","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.3"],"size":"65599999"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"a7ce556b52b7accf9e8431ebbaa762c11da0dc7cac6de25c0c52ff0c4db192a3","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-217220"],"size":"1240000"},{"id":"ce8c2293ef09c9987773345638026f9f7aed16bc52e7a6ea507f0c655ab17161","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.3"],"size":"123000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-217220"],"size":"32900000"},{"id":"080ed0ed8312deca92e9a769b518cdfa20f5278359bd156f3469dd8fa532db6b","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"53f6e1caceceacdd5b3a05bcd3ec80dde8a04cef4f194dc33472662fd56cb2d1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-217220"],"size":"30"},{"id":"1d9b3cbae03cea2a1766cfa5bf06a5a9c7a7bdbc6f5322756e29ac78e76f2708","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.3"],"size":"134000000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
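A sketch of consuming the `image ls --format json` output shown above; the struct mirrors only the fields visible in the log, and the sample data is one entry lifted from it:

package main

import (
	"encoding/json"
	"fmt"
)

// Image mirrors the visible fields of each element in the JSON array.
type Image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	data := []byte(`[{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"}]`)
	var images []Image
	if err := json.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags[0], img.Size)
	}
}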

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 image ls --format yaml
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-217220 image ls --format yaml:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-217220
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 5a79047369329dff4a02e705e650664d2019e583b802416447a6a17e9debb62d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.3
size: "56400000"
- id: 1d9b3cbae03cea2a1766cfa5bf06a5a9c7a7bdbc6f5322756e29ac78e76f2708
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.3
size: "134000000"
- id: ce8c2293ef09c9987773345638026f9f7aed16bc52e7a6ea507f0c655ab17161
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.3
size: "123000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 53f6e1caceceacdd5b3a05bcd3ec80dde8a04cef4f194dc33472662fd56cb2d1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-217220
size: "30"
- id: 080ed0ed8312deca92e9a769b518cdfa20f5278359bd156f3469dd8fa532db6b
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 92ed2bec97a637010666d6c4aa4d69b672baec0fd5d236d142e4227a3a0557d8
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.3
size: "65599999"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217220 ssh pgrep buildkitd: exit status 1 (188.667146ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 image build -t localhost/my-image:functional-217220 testdata/build
E0331 17:29:40.077514   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
functional_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p functional-217220 image build -t localhost/my-image:functional-217220 testdata/build: (3.614438112s)
functional_test.go:318: (dbg) Stdout: out/minikube-linux-amd64 -p functional-217220 image build -t localhost/my-image:functional-217220 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in d9d100af0ef0
Removing intermediate container d9d100af0ef0
---> bcae07e34efd
Step 3/3 : ADD content.txt /
---> a7ce556b52b7
Successfully built a7ce556b52b7
Successfully tagged localhost/my-image:functional-217220
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.01s)

TestFunctional/parallel/ImageCommands/Setup (1.26s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:340: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.190545188s)
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-217220
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.26s)

TestFunctional/parallel/ServiceCmd/DeployApp (14.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-217220 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-217220 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7f895565f7-4fzhr" [8478a73e-f2e1-4ff4-ae46-d5c1a42fcc22] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-7f895565f7-4fzhr" [8478a73e-f2e1-4ff4-ae46-d5c1a42fcc22] Running
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 14.017487829s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (14.23s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 image load --daemon gcr.io/google-containers/addon-resizer:functional-217220
functional_test.go:353: (dbg) Done: out/minikube-linux-amd64 -p functional-217220 image load --daemon gcr.io/google-containers/addon-resizer:functional-217220: (4.167728519s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.39s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 image load --daemon gcr.io/google-containers/addon-resizer:functional-217220
functional_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p functional-217220 image load --daemon gcr.io/google-containers/addon-resizer:functional-217220: (2.287450327s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.51s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:233: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.247102974s)
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-217220
functional_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 image load --daemon gcr.io/google-containers/addon-resizer:functional-217220
functional_test.go:243: (dbg) Done: out/minikube-linux-amd64 -p functional-217220 image load --daemon gcr.io/google-containers/addon-resizer:functional-217220: (4.025265529s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.55s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 image save gcr.io/google-containers/addon-resizer:functional-217220 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar
functional_test.go:378: (dbg) Done: out/minikube-linux-amd64 -p functional-217220 image save gcr.io/google-containers/addon-resizer:functional-217220 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar: (2.281407s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.28s)

TestFunctional/parallel/ServiceCmd/List (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.36s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 service list -o json
functional_test.go:1492: Took "358.669463ms" to run "out/minikube-linux-amd64 -p functional-217220 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 service --namespace=default --https --url hello-node
functional_test.go:1520: found endpoint: https://192.168.39.123:32123
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

TestFunctional/parallel/ServiceCmd/URL (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 service hello-node --url
functional_test.go:1563: found endpoint for hello-node: http://192.168.39.123:32123
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 image rm gcr.io/google-containers/addon-resizer:functional-217220
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

TestFunctional/parallel/DockerEnv/bash (1.27s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:494: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-217220 docker-env) && out/minikube-linux-amd64 status -p functional-217220"
functional_test.go:517: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-217220 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.27s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-217220 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar: (2.107224483s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.33s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/MountCmd/any-port (9.68s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-217220 /tmp/TestFunctionalparallelMountCmdany-port2786968901/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1680283763897262453" to /tmp/TestFunctionalparallelMountCmdany-port2786968901/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1680283763897262453" to /tmp/TestFunctionalparallelMountCmdany-port2786968901/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1680283763897262453" to /tmp/TestFunctionalparallelMountCmdany-port2786968901/001/test-1680283763897262453
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217220 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (238.074323ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 31 17:29 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 31 17:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 31 17:29 test-1680283763897262453
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh cat /mount-9p/test-1680283763897262453
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-217220 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b21beff2-6663-4a7a-9b74-608818ba1b6a] Pending
helpers_test.go:344: "busybox-mount" [b21beff2-6663-4a7a-9b74-608818ba1b6a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b21beff2-6663-4a7a-9b74-608818ba1b6a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b21beff2-6663-4a7a-9b74-608818ba1b6a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.007859513s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-217220 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-217220 /tmp/TestFunctionalparallelMountCmdany-port2786968901/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.68s)
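The first `findmnt` probe above fails because `minikube mount` is still settling when the check runs; the test simply polls until the 9p mount appears. A sketch of the same wait; the attempt count and interval are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 10; i++ {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-217220",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mount is up:\n%s", out)
			return
		}
		time.Sleep(time.Second) // the mount daemon may still be starting
	}
	fmt.Println("mount never appeared")
}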

TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1313: Took "258.108573ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1327: Took "43.15507ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1364: Took "306.33625ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1377: Took "59.523422ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-217220
functional_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 image save --daemon gcr.io/google-containers/addon-resizer:functional-217220
functional_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p functional-217220 image save --daemon gcr.io/google-containers/addon-resizer:functional-217220: (2.969767749s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-217220
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.13s)

TestFunctional/parallel/MountCmd/specific-port (1.84s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-217220 /tmp/TestFunctionalparallelMountCmdspecific-port3340900299/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217220 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (227.101792ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-217220 /tmp/TestFunctionalparallelMountCmdspecific-port3340900299/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-linux-amd64 -p functional-217220 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217220 ssh "sudo umount -f /mount-9p": exit status 1 (199.236235ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:228: "out/minikube-linux-amd64 -p functional-217220 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-217220 /tmp/TestFunctionalparallelMountCmdspecific-port3340900299/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.84s)
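
The first findmnt probe above exits non-zero because the 9p mount is still coming up, and the test simply retries until it appears. A minimal Go sketch of that start-then-poll pattern follows; the profile name, mount point, and port are taken from the log, while the host path "/tmp/src" and the 2s/30s retry cadence are hypothetical.

package main

import (
	"context"
	"log"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel() // tears the background mount down on exit

	// Start `minikube mount` as a background daemon, as the test does.
	mount := exec.CommandContext(ctx, "out/minikube-linux-amd64", "mount",
		"-p", "functional-217220", "/tmp/src:/mount-9p", "--port", "46464")
	if err := mount.Start(); err != nil {
		log.Fatalf("starting mount: %v", err)
	}

	// Poll until the guest sees the 9p mount, tolerating early failures.
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		probe := exec.Command("out/minikube-linux-amd64", "-p", "functional-217220",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if probe.Run() == nil {
			log.Println("9p mount is visible in the guest")
			return
		}
		time.Sleep(2 * time.Second) // not mounted yet; retry
	}
	log.Fatal("mount never appeared")
}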

TestFunctional/delete_addon-resizer_images (0.16s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-217220
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-217220
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-217220
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestGvisorAddon (325.39s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-836132 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-836132 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m1.35730865s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-836132 cache add gcr.io/k8s-minikube/gvisor-addon:2
E0331 18:01:55.053926   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/skaffold-531248/client.crt: no such file or directory
E0331 18:02:05.294686   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/skaffold-531248/client.crt: no such file or directory
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-836132 cache add gcr.io/k8s-minikube/gvisor-addon:2: (25.007869017s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-836132 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-836132 addons enable gvisor: (4.777961442s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [43696a37-9d1f-4c71-9b80-2edfd733f4a0] Running
E0331 18:02:25.775507   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/skaffold-531248/client.crt: no such file or directory
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.035688458s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-836132 replace --force -f testdata/nginx-untrusted.yaml
gvisor_addon_test.go:78: (dbg) Run:  kubectl --context gvisor-836132 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:83: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,untrusted=true" in namespace "default" ...
helpers_test.go:344: "nginx-untrusted" [5422d325-1a97-47a4-b800-586113a92c7e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-untrusted" [5422d325-1a97-47a4-b800-586113a92c7e] Running
gvisor_addon_test.go:83: (dbg) TestGvisorAddon: run=nginx,untrusted=true healthy within 16.019052733s
gvisor_addon_test.go:86: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [821dbe8c-fcce-4401-afe7-d6a1d3db8ab9] Running
gvisor_addon_test.go:86: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.008777587s
gvisor_addon_test.go:91: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-836132
E0331 18:03:06.735994   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/skaffold-531248/client.crt: no such file or directory
E0331 18:03:15.615795   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
E0331 18:03:18.151531   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
gvisor_addon_test.go:91: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-836132: (1m42.241480243s)
gvisor_addon_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-836132 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-836132 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m28.972243362s)
gvisor_addon_test.go:100: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [43696a37-9d1f-4c71-9b80-2edfd733f4a0] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
gvisor_addon_test.go:100: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.099643183s
gvisor_addon_test.go:103: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,untrusted=true" in namespace "default" ...
helpers_test.go:344: "nginx-untrusted" [5422d325-1a97-47a4-b800-586113a92c7e] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:103: (dbg) TestGvisorAddon: run=nginx,untrusted=true healthy within 5.06443502s
gvisor_addon_test.go:106: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [821dbe8c-fcce-4401-afe7-d6a1d3db8ab9] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:106: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.008152537s
helpers_test.go:175: Cleaning up "gvisor-836132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-836132
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-836132: (1.249936269s)
--- PASS: TestGvisorAddon (325.39s)
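
The "waiting 4m0s for pods matching ..." steps above are label-selector polls against the cluster. A minimal client-go sketch of that wait loop follows, for illustration only: the kubeconfig path, poll interval, and the Running-phase check are assumptions here, and the suite's real helpers live in helpers_test.go.

package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config (an assumption; the tests use per-run kubeconfigs).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=gvisor"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					log.Printf("pod %s is Running", p.Name)
					return
				}
			}
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatal("timed out waiting for the gvisor pod")
}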

TestImageBuild/serial/NormalBuild (2.31s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-874530
E0331 17:31:01.998184   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
image_test.go:73: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-874530: (2.308243336s)
--- PASS: TestImageBuild/serial/NormalBuild (2.31s)

TestImageBuild/serial/BuildWithBuildArg (1.66s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-874530
image_test.go:94: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-874530: (1.656135156s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.66s)

TestImageBuild/serial/BuildWithDockerIgnore (0.5s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-874530
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.50s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.34s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-874530
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.34s)

TestIngressAddonLegacy/StartLegacyK8sCluster (110.94s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-757983 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-757983 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m50.935419073s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (110.94s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.38s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-757983 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-757983 addons enable ingress --alsologtostderr -v=5: (18.376297299s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.38s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.5s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-757983 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.50s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (39.8s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:182: (dbg) Run:  kubectl --context ingress-addon-legacy-757983 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E0331 17:33:18.151504   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
addons_test.go:182: (dbg) Done: kubectl --context ingress-addon-legacy-757983 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.965655508s)
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-757983 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:220: (dbg) Run:  kubectl --context ingress-addon-legacy-757983 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:225: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ee11152f-fa39-44b3-9d5d-b43e7806f7de] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ee11152f-fa39-44b3-9d5d-b43e7806f7de] Running
addons_test.go:225: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.010916256s
addons_test.go:237: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-757983 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-757983 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-757983 ip
addons_test.go:272: (dbg) Run:  nslookup hello-john.test 192.168.39.219
addons_test.go:281: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-757983 addons disable ingress-dns --alsologtostderr -v=1
E0331 17:33:45.838495   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
addons_test.go:281: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-757983 addons disable ingress-dns --alsologtostderr -v=1: (9.266562258s)
addons_test.go:286: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-757983 addons disable ingress --alsologtostderr -v=1
addons_test.go:286: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-757983 addons disable ingress --alsologtostderr -v=1: (7.398282193s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (39.80s)

TestJSONOutput/start/Command (110.46s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-750415 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0331 17:34:06.732589   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
E0331 17:34:06.737924   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
E0331 17:34:06.748191   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
E0331 17:34:06.768462   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
E0331 17:34:06.808775   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
E0331 17:34:06.889174   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
E0331 17:34:07.049680   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
E0331 17:34:07.370276   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
E0331 17:34:08.011207   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
E0331 17:34:09.292127   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
E0331 17:34:11.853145   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
E0331 17:34:16.973814   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
E0331 17:34:27.214674   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
E0331 17:34:47.695527   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
E0331 17:35:28.656530   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-750415 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m50.457842025s)
--- PASS: TestJSONOutput/start/Command (110.46s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.61s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-750415 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.61s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.57s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-750415 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (13.1s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-750415 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-750415 --output=json --user=testUser: (13.098245093s)
--- PASS: TestJSONOutput/stop/Command (13.10s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.43s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-335911 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-335911 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.873851ms)

-- stdout --
	{"specversion":"1.0","id":"4b5fd109-5951-45e3-9970-c3310cbd0224","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-335911] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f4d074ff-7a0e-42a7-ae1c-11c1e23ba0cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16144"}}
	{"specversion":"1.0","id":"a8a40430-3fb0-4563-80cd-8bf0dfd693c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bc18b1a9-1cbe-4cd3-a1b0-a57e9c534f08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16144-3494/kubeconfig"}}
	{"specversion":"1.0","id":"7b325ff1-5312-4b22-acbf-d0534a56040c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16144-3494/.minikube"}}
	{"specversion":"1.0","id":"7497bc25-f8fa-4148-a752-3710c7b6e097","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ed8c6f87-abb6-4144-9c63-538b409bde14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3d3d35f7-0688-4ae6-bedd-5eb5b084feb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-335911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-335911
--- PASS: TestErrorJSONOutput (0.43s)
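
Each stdout line above is a self-contained CloudEvents-style JSON object, which is what makes failures like DRV_UNSUPPORTED_OS detectable programmatically. A minimal Go sketch that scans such output and surfaces error events follows; only the envelope fields visible in the log are assumed.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models just the envelope fields this sketch reads.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Pipe minikube's --output=json stdout into this program.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}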

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (105.05s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-997410 --driver=kvm2 
E0331 17:36:50.577029   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-997410 --driver=kvm2 : (51.900189286s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-999588 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-999588 --driver=kvm2 : (50.084748562s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-997410
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-999588
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-999588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-999588
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-999588: (1.006283863s)
helpers_test.go:175: Cleaning up "first-997410" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-997410
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-997410: (1.007617897s)
--- PASS: TestMinikubeProfile (105.05s)

TestMountStart/serial/StartWithMountFirst (33.01s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-738397 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0331 17:38:15.615972   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
E0331 17:38:15.621245   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
E0331 17:38:15.631534   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
E0331 17:38:15.651838   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
E0331 17:38:15.692214   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
E0331 17:38:15.772490   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
E0331 17:38:15.932917   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
E0331 17:38:16.253548   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
E0331 17:38:16.894584   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
E0331 17:38:18.151970   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
E0331 17:38:18.175193   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-738397 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (32.01132762s)
--- PASS: TestMountStart/serial/StartWithMountFirst (33.01s)

TestMountStart/serial/VerifyMountFirst (0.49s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-738397 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-738397 ssh -- mount | grep 9p
E0331 17:38:20.735784   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountFirst (0.49s)

TestMountStart/serial/StartWithMountSecond (28.18s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-751238 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0331 17:38:25.856155   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
E0331 17:38:36.096992   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-751238 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.175716859s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.18s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-751238 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-751238 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-738397 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-751238 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-751238 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (2.32s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-751238
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-751238: (2.324098684s)
--- PASS: TestMountStart/serial/Stop (2.32s)

TestMountStart/serial/RestartStopped (23.29s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-751238
E0331 17:38:56.577250   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
E0331 17:39:06.731570   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-751238: (22.287160445s)
--- PASS: TestMountStart/serial/RestartStopped (23.29s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-751238 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-751238 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (167.25s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-930065 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0331 17:39:34.419121   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
E0331 17:39:37.538222   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
E0331 17:40:59.458773   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-930065 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m46.827139575s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (167.25s)

TestMultiNode/serial/DeployApp2Nodes (4.99s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930065 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930065 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-930065 -- rollout status deployment/busybox: (3.098133255s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930065 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930065 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930065 -- exec busybox-6b86dd6d48-6zxvb -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930065 -- exec busybox-6b86dd6d48-cbl75 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930065 -- exec busybox-6b86dd6d48-6zxvb -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930065 -- exec busybox-6b86dd6d48-cbl75 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930065 -- exec busybox-6b86dd6d48-6zxvb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930065 -- exec busybox-6b86dd6d48-cbl75 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.99s)

TestMultiNode/serial/PingHostFrom2Pods (0.93s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930065 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930065 -- exec busybox-6b86dd6d48-6zxvb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930065 -- exec busybox-6b86dd6d48-6zxvb -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930065 -- exec busybox-6b86dd6d48-cbl75 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930065 -- exec busybox-6b86dd6d48-cbl75 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)

TestMultiNode/serial/AddNode (54.75s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-930065 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-930065 -v 3 --alsologtostderr: (54.168777133s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.75s)

TestMultiNode/serial/ProfileList (0.27s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.27s)

TestMultiNode/serial/CopyFile (7.38s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 cp testdata/cp-test.txt multinode-930065:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 ssh -n multinode-930065 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 cp multinode-930065:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile104215586/001/cp-test_multinode-930065.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 ssh -n multinode-930065 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 cp multinode-930065:/home/docker/cp-test.txt multinode-930065-m02:/home/docker/cp-test_multinode-930065_multinode-930065-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 ssh -n multinode-930065 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 ssh -n multinode-930065-m02 "sudo cat /home/docker/cp-test_multinode-930065_multinode-930065-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 cp multinode-930065:/home/docker/cp-test.txt multinode-930065-m03:/home/docker/cp-test_multinode-930065_multinode-930065-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 ssh -n multinode-930065 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 ssh -n multinode-930065-m03 "sudo cat /home/docker/cp-test_multinode-930065_multinode-930065-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 cp testdata/cp-test.txt multinode-930065-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 ssh -n multinode-930065-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 cp multinode-930065-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile104215586/001/cp-test_multinode-930065-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 ssh -n multinode-930065-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 cp multinode-930065-m02:/home/docker/cp-test.txt multinode-930065:/home/docker/cp-test_multinode-930065-m02_multinode-930065.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 ssh -n multinode-930065-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 ssh -n multinode-930065 "sudo cat /home/docker/cp-test_multinode-930065-m02_multinode-930065.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 cp multinode-930065-m02:/home/docker/cp-test.txt multinode-930065-m03:/home/docker/cp-test_multinode-930065-m02_multinode-930065-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 ssh -n multinode-930065-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 ssh -n multinode-930065-m03 "sudo cat /home/docker/cp-test_multinode-930065-m02_multinode-930065-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 cp testdata/cp-test.txt multinode-930065-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 ssh -n multinode-930065-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 cp multinode-930065-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile104215586/001/cp-test_multinode-930065-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 ssh -n multinode-930065-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 cp multinode-930065-m03:/home/docker/cp-test.txt multinode-930065:/home/docker/cp-test_multinode-930065-m03_multinode-930065.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 ssh -n multinode-930065-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 ssh -n multinode-930065 "sudo cat /home/docker/cp-test_multinode-930065-m03_multinode-930065.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 cp multinode-930065-m03:/home/docker/cp-test.txt multinode-930065-m02:/home/docker/cp-test_multinode-930065-m03_multinode-930065-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 ssh -n multinode-930065-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 ssh -n multinode-930065-m02 "sudo cat /home/docker/cp-test_multinode-930065-m03_multinode-930065-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.38s)
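
The copy steps above all follow one pattern: push a file with "minikube cp" (which accepts node:path endpoints, as the helpers show), read it back through "minikube ssh -n <node>", and compare. A minimal Go sketch of that round-trip check follows; the profile and node names are taken from the log, while the payload and host path are hypothetical.

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	want := []byte("cp-test contents\n") // hypothetical payload
	src := "/tmp/cp-test.txt"
	if err := os.WriteFile(src, want, 0o644); err != nil {
		log.Fatal(err)
	}

	const profile = "multinode-930065"
	for _, node := range []string{profile, profile + "-m02", profile + "-m03"} {
		// Push the file into the node's filesystem.
		cp := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"cp", src, node+":/home/docker/cp-test.txt")
		if out, err := cp.CombinedOutput(); err != nil {
			log.Fatalf("cp to %s: %v\n%s", node, err, out)
		}
		// Read it back over ssh and verify the round trip.
		got, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "-n", node, "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			log.Fatalf("ssh cat on %s: %v", node, err)
		}
		if !bytes.Equal(got, want) {
			log.Fatalf("%s: contents differ after round trip", node)
		}
		log.Printf("%s: contents verified", node)
	}
}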

TestMultiNode/serial/StopNode (3.96s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 node stop m03
E0331 17:43:15.615027   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-930065 node stop m03: (3.080177589s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-930065 status: exit status 7 (432.67853ms)

-- stdout --
	multinode-930065
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-930065-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-930065-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-930065 status --alsologtostderr: exit status 7 (444.788505ms)

-- stdout --
	multinode-930065
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-930065-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-930065-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0331 17:43:16.621206   23940 out.go:296] Setting OutFile to fd 1 ...
	I0331 17:43:16.621333   23940 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 17:43:16.621339   23940 out.go:309] Setting ErrFile to fd 2...
	I0331 17:43:16.621346   23940 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 17:43:16.621478   23940 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16144-3494/.minikube/bin
	I0331 17:43:16.621660   23940 out.go:303] Setting JSON to false
	I0331 17:43:16.621688   23940 mustload.go:65] Loading cluster: multinode-930065
	I0331 17:43:16.621798   23940 notify.go:220] Checking for updates...
	I0331 17:43:16.622152   23940 config.go:182] Loaded profile config "multinode-930065": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 17:43:16.622168   23940 status.go:255] checking status of multinode-930065 ...
	I0331 17:43:16.622525   23940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 17:43:16.622595   23940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 17:43:16.638051   23940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43195
	I0331 17:43:16.638455   23940 main.go:141] libmachine: () Calling .GetVersion
	I0331 17:43:16.638984   23940 main.go:141] libmachine: Using API Version  1
	I0331 17:43:16.639007   23940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 17:43:16.639351   23940 main.go:141] libmachine: () Calling .GetMachineName
	I0331 17:43:16.639575   23940 main.go:141] libmachine: (multinode-930065) Calling .GetState
	I0331 17:43:16.641276   23940 status.go:330] multinode-930065 host status = "Running" (err=<nil>)
	I0331 17:43:16.641297   23940 host.go:66] Checking if "multinode-930065" exists ...
	I0331 17:43:16.641679   23940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 17:43:16.641728   23940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 17:43:16.656472   23940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46675
	I0331 17:43:16.656959   23940 main.go:141] libmachine: () Calling .GetVersion
	I0331 17:43:16.657400   23940 main.go:141] libmachine: Using API Version  1
	I0331 17:43:16.657426   23940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 17:43:16.657741   23940 main.go:141] libmachine: () Calling .GetMachineName
	I0331 17:43:16.657899   23940 main.go:141] libmachine: (multinode-930065) Calling .GetIP
	I0331 17:43:16.660991   23940 main.go:141] libmachine: (multinode-930065) DBG | domain multinode-930065 has defined MAC address 52:54:00:0b:49:1f in network mk-multinode-930065
	I0331 17:43:16.661406   23940 main.go:141] libmachine: (multinode-930065) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:49:1f", ip: ""} in network mk-multinode-930065: {Iface:virbr1 ExpiryTime:2023-03-31 18:39:32 +0000 UTC Type:0 Mac:52:54:00:0b:49:1f Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-930065 Clientid:01:52:54:00:0b:49:1f}
	I0331 17:43:16.661441   23940 main.go:141] libmachine: (multinode-930065) DBG | domain multinode-930065 has defined IP address 192.168.39.220 and MAC address 52:54:00:0b:49:1f in network mk-multinode-930065
	I0331 17:43:16.661580   23940 host.go:66] Checking if "multinode-930065" exists ...
	I0331 17:43:16.661894   23940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 17:43:16.661929   23940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 17:43:16.676055   23940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I0331 17:43:16.676429   23940 main.go:141] libmachine: () Calling .GetVersion
	I0331 17:43:16.676856   23940 main.go:141] libmachine: Using API Version  1
	I0331 17:43:16.676878   23940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 17:43:16.677137   23940 main.go:141] libmachine: () Calling .GetMachineName
	I0331 17:43:16.677289   23940 main.go:141] libmachine: (multinode-930065) Calling .DriverName
	I0331 17:43:16.677422   23940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0331 17:43:16.677453   23940 main.go:141] libmachine: (multinode-930065) Calling .GetSSHHostname
	I0331 17:43:16.680096   23940 main.go:141] libmachine: (multinode-930065) DBG | domain multinode-930065 has defined MAC address 52:54:00:0b:49:1f in network mk-multinode-930065
	I0331 17:43:16.680433   23940 main.go:141] libmachine: (multinode-930065) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:49:1f", ip: ""} in network mk-multinode-930065: {Iface:virbr1 ExpiryTime:2023-03-31 18:39:32 +0000 UTC Type:0 Mac:52:54:00:0b:49:1f Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-930065 Clientid:01:52:54:00:0b:49:1f}
	I0331 17:43:16.680468   23940 main.go:141] libmachine: (multinode-930065) DBG | domain multinode-930065 has defined IP address 192.168.39.220 and MAC address 52:54:00:0b:49:1f in network mk-multinode-930065
	I0331 17:43:16.680630   23940 main.go:141] libmachine: (multinode-930065) Calling .GetSSHPort
	I0331 17:43:16.680767   23940 main.go:141] libmachine: (multinode-930065) Calling .GetSSHKeyPath
	I0331 17:43:16.680889   23940 main.go:141] libmachine: (multinode-930065) Calling .GetSSHUsername
	I0331 17:43:16.680980   23940 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/machines/multinode-930065/id_rsa Username:docker}
	I0331 17:43:16.780010   23940 ssh_runner.go:195] Run: systemctl --version
	I0331 17:43:16.785869   23940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 17:43:16.799945   23940 kubeconfig.go:92] found "multinode-930065" server: "https://192.168.39.220:8443"
	I0331 17:43:16.799973   23940 api_server.go:165] Checking apiserver status ...
	I0331 17:43:16.800004   23940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 17:43:16.813168   23940 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1813/cgroup
	I0331 17:43:16.823684   23940 api_server.go:181] apiserver freezer: "2:freezer:/kubepods/burstable/poda5e9222cc5c31c18d1d5a649bf0e9f85/47dbbaeb8f59623309197c99efe43125f2a81f44ac031f98b1432a73510a10d3"
	I0331 17:43:16.823779   23940 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda5e9222cc5c31c18d1d5a649bf0e9f85/47dbbaeb8f59623309197c99efe43125f2a81f44ac031f98b1432a73510a10d3/freezer.state
	I0331 17:43:16.832890   23940 api_server.go:203] freezer state: "THAWED"
	I0331 17:43:16.832928   23940 api_server.go:252] Checking apiserver healthz at https://192.168.39.220:8443/healthz ...
	I0331 17:43:16.839373   23940 api_server.go:278] https://192.168.39.220:8443/healthz returned 200:
	ok
	I0331 17:43:16.839394   23940 status.go:421] multinode-930065 apiserver status = Running (err=<nil>)
	I0331 17:43:16.839403   23940 status.go:257] multinode-930065 status: &{Name:multinode-930065 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0331 17:43:16.839421   23940 status.go:255] checking status of multinode-930065-m02 ...
	I0331 17:43:16.839734   23940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 17:43:16.839776   23940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 17:43:16.854049   23940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33347
	I0331 17:43:16.854421   23940 main.go:141] libmachine: () Calling .GetVersion
	I0331 17:43:16.854899   23940 main.go:141] libmachine: Using API Version  1
	I0331 17:43:16.854921   23940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 17:43:16.855234   23940 main.go:141] libmachine: () Calling .GetMachineName
	I0331 17:43:16.855415   23940 main.go:141] libmachine: (multinode-930065-m02) Calling .GetState
	I0331 17:43:16.856866   23940 status.go:330] multinode-930065-m02 host status = "Running" (err=<nil>)
	I0331 17:43:16.856890   23940 host.go:66] Checking if "multinode-930065-m02" exists ...
	I0331 17:43:16.857178   23940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 17:43:16.857207   23940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 17:43:16.871063   23940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40291
	I0331 17:43:16.871440   23940 main.go:141] libmachine: () Calling .GetVersion
	I0331 17:43:16.871917   23940 main.go:141] libmachine: Using API Version  1
	I0331 17:43:16.871939   23940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 17:43:16.872231   23940 main.go:141] libmachine: () Calling .GetMachineName
	I0331 17:43:16.872435   23940 main.go:141] libmachine: (multinode-930065-m02) Calling .GetIP
	I0331 17:43:16.874973   23940 main.go:141] libmachine: (multinode-930065-m02) DBG | domain multinode-930065-m02 has defined MAC address 52:54:00:08:14:e3 in network mk-multinode-930065
	I0331 17:43:16.875489   23940 main.go:141] libmachine: (multinode-930065-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:14:e3", ip: ""} in network mk-multinode-930065: {Iface:virbr1 ExpiryTime:2023-03-31 18:40:47 +0000 UTC Type:0 Mac:52:54:00:08:14:e3 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:multinode-930065-m02 Clientid:01:52:54:00:08:14:e3}
	I0331 17:43:16.875520   23940 main.go:141] libmachine: (multinode-930065-m02) DBG | domain multinode-930065-m02 has defined IP address 192.168.39.13 and MAC address 52:54:00:08:14:e3 in network mk-multinode-930065
	I0331 17:43:16.875672   23940 host.go:66] Checking if "multinode-930065-m02" exists ...
	I0331 17:43:16.875961   23940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 17:43:16.875984   23940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 17:43:16.890088   23940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41137
	I0331 17:43:16.890516   23940 main.go:141] libmachine: () Calling .GetVersion
	I0331 17:43:16.891141   23940 main.go:141] libmachine: Using API Version  1
	I0331 17:43:16.891167   23940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 17:43:16.891475   23940 main.go:141] libmachine: () Calling .GetMachineName
	I0331 17:43:16.891650   23940 main.go:141] libmachine: (multinode-930065-m02) Calling .DriverName
	I0331 17:43:16.891847   23940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0331 17:43:16.891874   23940 main.go:141] libmachine: (multinode-930065-m02) Calling .GetSSHHostname
	I0331 17:43:16.894866   23940 main.go:141] libmachine: (multinode-930065-m02) DBG | domain multinode-930065-m02 has defined MAC address 52:54:00:08:14:e3 in network mk-multinode-930065
	I0331 17:43:16.895309   23940 main.go:141] libmachine: (multinode-930065-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:14:e3", ip: ""} in network mk-multinode-930065: {Iface:virbr1 ExpiryTime:2023-03-31 18:40:47 +0000 UTC Type:0 Mac:52:54:00:08:14:e3 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:multinode-930065-m02 Clientid:01:52:54:00:08:14:e3}
	I0331 17:43:16.895363   23940 main.go:141] libmachine: (multinode-930065-m02) DBG | domain multinode-930065-m02 has defined IP address 192.168.39.13 and MAC address 52:54:00:08:14:e3 in network mk-multinode-930065
	I0331 17:43:16.895571   23940 main.go:141] libmachine: (multinode-930065-m02) Calling .GetSSHPort
	I0331 17:43:16.895775   23940 main.go:141] libmachine: (multinode-930065-m02) Calling .GetSSHKeyPath
	I0331 17:43:16.895987   23940 main.go:141] libmachine: (multinode-930065-m02) Calling .GetSSHUsername
	I0331 17:43:16.896134   23940 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/machines/multinode-930065-m02/id_rsa Username:docker}
	I0331 17:43:16.991843   23940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 17:43:17.005653   23940 status.go:257] multinode-930065-m02 status: &{Name:multinode-930065-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0331 17:43:17.005699   23940 status.go:255] checking status of multinode-930065-m03 ...
	I0331 17:43:17.006144   23940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 17:43:17.006178   23940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 17:43:17.021736   23940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37667
	I0331 17:43:17.022138   23940 main.go:141] libmachine: () Calling .GetVersion
	I0331 17:43:17.022634   23940 main.go:141] libmachine: Using API Version  1
	I0331 17:43:17.022660   23940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 17:43:17.023001   23940 main.go:141] libmachine: () Calling .GetMachineName
	I0331 17:43:17.023198   23940 main.go:141] libmachine: (multinode-930065-m03) Calling .GetState
	I0331 17:43:17.024808   23940 status.go:330] multinode-930065-m03 host status = "Stopped" (err=<nil>)
	I0331 17:43:17.024825   23940 status.go:343] host is not running, skipping remaining checks
	I0331 17:43:17.024833   23940 status.go:257] multinode-930065-m03 status: &{Name:multinode-930065-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.96s)
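
Worth noting for scripting: `minikube status` exits 0 only when every node is running, and exited with status 7 here once m03 was stopped. A minimal sketch of acting on that exit code (profile name from this run):

    out/minikube-linux-amd64 -p multinode-930065 node stop m03
    out/minikube-linux-amd64 -p multinode-930065 status
    rc=$?
    # rc is 0 when all nodes are up; 7 here signals at least one stopped node
    echo "status exit code: ${rc}"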

TestMultiNode/serial/StartAfterStop (32.61s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 node start m03 --alsologtostderr
E0331 17:43:18.151467   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
E0331 17:43:43.299960   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-930065 node start m03 --alsologtostderr: (31.966189504s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.61s)

TestMultiNode/serial/RestartKeepsNodes (178.85s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-930065
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-930065
E0331 17:44:06.731871   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-930065: (29.181300485s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-930065 --wait=true -v=8 --alsologtostderr
E0331 17:44:41.199733   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-930065 --wait=true -v=8 --alsologtostderr: (2m29.588292157s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-930065
--- PASS: TestMultiNode/serial/RestartKeepsNodes (178.85s)
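
The restart sequence above boils down to four commands; `--wait=true` makes the second start block until cluster components are healthy (profile name from this run):

    out/minikube-linux-amd64 node list -p multinode-930065    # record the node list
    out/minikube-linux-amd64 stop -p multinode-930065         # stop every node in the profile
    out/minikube-linux-amd64 start -p multinode-930065 --wait=true --alsologtostderr
    out/minikube-linux-amd64 node list -p multinode-930065    # expect the same nodes as before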

TestMultiNode/serial/DeleteNode (1.84s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-930065 node delete m03: (1.290355287s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.84s)
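
The final assertion relies on a kubectl go-template that prints one Ready condition per node; stripped of the harness's extra quoting, it looks like this:

    kubectl get nodes -o 'go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # prints one "True" per remaining node; the deleted m03 no longer appears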

TestMultiNode/serial/StopMultiNode (26.29s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-930065 stop: (26.142289888s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-930065 status: exit status 7 (74.407787ms)

-- stdout --
	multinode-930065
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-930065-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-930065 status --alsologtostderr: exit status 7 (73.300542ms)

-- stdout --
	multinode-930065
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-930065-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0331 17:47:16.589191   24863 out.go:296] Setting OutFile to fd 1 ...
	I0331 17:47:16.589330   24863 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 17:47:16.589339   24863 out.go:309] Setting ErrFile to fd 2...
	I0331 17:47:16.589344   24863 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 17:47:16.589464   24863 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16144-3494/.minikube/bin
	I0331 17:47:16.589683   24863 out.go:303] Setting JSON to false
	I0331 17:47:16.589710   24863 mustload.go:65] Loading cluster: multinode-930065
	I0331 17:47:16.589818   24863 notify.go:220] Checking for updates...
	I0331 17:47:16.590145   24863 config.go:182] Loaded profile config "multinode-930065": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 17:47:16.590165   24863 status.go:255] checking status of multinode-930065 ...
	I0331 17:47:16.590569   24863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 17:47:16.590640   24863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 17:47:16.605556   24863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37979
	I0331 17:47:16.605956   24863 main.go:141] libmachine: () Calling .GetVersion
	I0331 17:47:16.606456   24863 main.go:141] libmachine: Using API Version  1
	I0331 17:47:16.606477   24863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 17:47:16.606830   24863 main.go:141] libmachine: () Calling .GetMachineName
	I0331 17:47:16.607019   24863 main.go:141] libmachine: (multinode-930065) Calling .GetState
	I0331 17:47:16.608659   24863 status.go:330] multinode-930065 host status = "Stopped" (err=<nil>)
	I0331 17:47:16.608675   24863 status.go:343] host is not running, skipping remaining checks
	I0331 17:47:16.608681   24863 status.go:257] multinode-930065 status: &{Name:multinode-930065 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0331 17:47:16.608700   24863 status.go:255] checking status of multinode-930065-m02 ...
	I0331 17:47:16.608971   24863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0331 17:47:16.609005   24863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0331 17:47:16.622945   24863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
	I0331 17:47:16.623378   24863 main.go:141] libmachine: () Calling .GetVersion
	I0331 17:47:16.623798   24863 main.go:141] libmachine: Using API Version  1
	I0331 17:47:16.623818   24863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0331 17:47:16.624055   24863 main.go:141] libmachine: () Calling .GetMachineName
	I0331 17:47:16.624229   24863 main.go:141] libmachine: (multinode-930065-m02) Calling .GetState
	I0331 17:47:16.625474   24863 status.go:330] multinode-930065-m02 host status = "Stopped" (err=<nil>)
	I0331 17:47:16.625486   24863 status.go:343] host is not running, skipping remaining checks
	I0331 17:47:16.625491   24863 status.go:257] multinode-930065-m02 status: &{Name:multinode-930065-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (26.29s)

TestMultiNode/serial/RestartMultiNode (104.9s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-930065 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0331 17:48:15.614975   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
E0331 17:48:18.151803   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-930065 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m44.366026668s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930065 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (104.90s)

TestMultiNode/serial/ValidateNameConflict (54.92s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-930065
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-930065-m02 --driver=kvm2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-930065-m02 --driver=kvm2 : exit status 14 (60.600387ms)

-- stdout --
	* [multinode-930065-m02] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16144
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16144-3494/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16144-3494/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-930065-m02' is duplicated with machine name 'multinode-930065-m02' in profile 'multinode-930065'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-930065-m03 --driver=kvm2 
E0331 17:49:06.732571   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-930065-m03 --driver=kvm2 : (53.590226636s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-930065
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-930065: exit status 80 (210.031409ms)

-- stdout --
	* Adding node m03 to cluster multinode-930065
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-930065-m03 already exists in multinode-930065-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-930065-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-930065-m03: (1.023711837s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (54.92s)
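
The conflict rule being validated: a new profile name may not collide with any machine name owned by an existing profile (the per-node suffixes -m02, -m03, ...). Sketched against this run's profiles:

    out/minikube-linux-amd64 start -p multinode-930065-m02 --driver=kvm2
    echo $?    # 14 (MK_USAGE): name clashes with a machine inside profile multinode-930065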

TestPreload (205.46s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-479850 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0331 17:50:29.779855   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-479850 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (2m2.71402784s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-479850 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-479850 -- docker pull gcr.io/k8s-minikube/busybox: (1.356362308s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-479850
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-479850: (13.09428596s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-479850 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0331 17:53:15.614998   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
E0331 17:53:18.151431   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-479850 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m6.903557225s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-479850 -- docker images
helpers_test.go:175: Cleaning up "test-preload-479850" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-479850
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-479850: (1.156509407s)
--- PASS: TestPreload (205.46s)
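
Condensed, the preload scenario is: create a cluster without preloaded images, side-load an image into the VM's Docker, stop, restart, and check the side-loaded image survived. A sketch built from the commands above:

    out/minikube-linux-amd64 start -p test-preload-479850 --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=kvm2
    out/minikube-linux-amd64 ssh -p test-preload-479850 -- docker pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-479850
    out/minikube-linux-amd64 start -p test-preload-479850 --memory=2200 --driver=kvm2
    out/minikube-linux-amd64 ssh -p test-preload-479850 -- docker images    # busybox should still be listed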

TestScheduledStopUnix (125.32s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-453246 --memory=2048 --driver=kvm2 
E0331 17:54:06.731771   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-453246 --memory=2048 --driver=kvm2 : (53.732499659s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-453246 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-453246 -n scheduled-stop-453246
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-453246 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-453246 --cancel-scheduled
E0331 17:54:38.660185   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-453246 -n scheduled-stop-453246
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-453246
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-453246 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-453246
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-453246: exit status 7 (56.270335ms)

-- stdout --
	scheduled-stop-453246
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-453246 -n scheduled-stop-453246
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-453246 -n scheduled-stop-453246: exit status 7 (57.107434ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-453246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-453246
--- PASS: TestScheduledStopUnix (125.32s)
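
The scheduled-stop surface exercised here, gathered in one place (profile name from this run; the schedule durations are examples):

    out/minikube-linux-amd64 stop -p scheduled-stop-453246 --schedule 5m           # arm a stop 5 minutes out
    out/minikube-linux-amd64 status -p scheduled-stop-453246 --format={{.TimeToStop}}   # inspect the countdown
    out/minikube-linux-amd64 stop -p scheduled-stop-453246 --cancel-scheduled      # disarm the pending stop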

TestSkaffold (86.84s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2740209582 version
skaffold_test.go:63: skaffold version: v2.3.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-531248 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-531248 --memory=2600 --driver=kvm2 : (51.8867721s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2740209582 run --minikube-profile skaffold-531248 --kube-context skaffold-531248 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2740209582 run --minikube-profile skaffold-531248 --kube-context skaffold-531248 --status-check=true --port-forward=false --interactive=false: (23.049843991s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5df996b54d-dvrxn" [d490c0e0-aa07-4d64-810b-c36226dcc8f3] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.016394709s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-dc495df74-nxf9q" [86628fe9-5dd5-44d4-9288-d5a20ee69b1b] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.007334054s
helpers_test.go:175: Cleaning up "skaffold-531248" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-531248
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-531248: (1.067452158s)
--- PASS: TestSkaffold (86.84s)
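
The integration only requires skaffold's profile and kube-context flags to name the same cluster; with a skaffold binary on PATH (the /tmp name above is the harness's download), the equivalent invocation is:

    skaffold run --minikube-profile skaffold-531248 --kube-context skaffold-531248 \
        --status-check=true --port-forward=false --interactive=false
    kubectl get pods -l app=leeroy-app    # the deployed pods should reach Running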

TestRunningBinaryUpgrade (154.09s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
E0331 17:58:15.615001   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.6.2.2581319197.exe start -p running-upgrade-731453 --memory=2200 --vm-driver=kvm2 
E0331 17:58:18.151456   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.6.2.2581319197.exe start -p running-upgrade-731453 --memory=2200 --vm-driver=kvm2 : (1m50.235492177s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-731453 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-731453 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (42.186780602s)
helpers_test.go:175: Cleaning up "running-upgrade-731453" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-731453
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-731453: (1.334821578s)
--- PASS: TestRunningBinaryUpgrade (154.09s)

TestKubernetesUpgrade (228.09s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-075589 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
E0331 17:59:06.731869   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
version_upgrade_test.go:230: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-075589 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m19.111521384s)
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-075589
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-075589: (13.148899644s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-075589 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-075589 status --format={{.Host}}: exit status 7 (83.930323ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-075589 --memory=2200 --kubernetes-version=v1.27.0-rc.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:251: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-075589 --memory=2200 --kubernetes-version=v1.27.0-rc.0 --alsologtostderr -v=1 --driver=kvm2 : (48.134575624s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-075589 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-075589 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-075589 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (84.263252ms)

-- stdout --
	* [kubernetes-upgrade-075589] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16144
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16144-3494/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16144-3494/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.0-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-075589
	    minikube start -p kubernetes-upgrade-075589 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0755892 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-075589 --kubernetes-version=v1.27.0-rc.0
	    

** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-075589 --memory=2200 --kubernetes-version=v1.27.0-rc.0 --alsologtostderr -v=1 --driver=kvm2 
E0331 18:01:21.199917   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
version_upgrade_test.go:283: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-075589 --memory=2200 --kubernetes-version=v1.27.0-rc.0 --alsologtostderr -v=1 --driver=kvm2 : (1m26.129395106s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-075589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-075589
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-075589: (1.32622661s)
--- PASS: TestKubernetesUpgrade (228.09s)
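
The version lifecycle this test pins down: restarting a profile with a newer --kubernetes-version upgrades it in place, while requesting an older version exits 106 (K8S_DOWNGRADE_UNSUPPORTED) with the recovery options printed above. Condensed from this run's commands:

    out/minikube-linux-amd64 start -p kubernetes-upgrade-075589 --kubernetes-version=v1.16.0 --driver=kvm2
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-075589
    out/minikube-linux-amd64 start -p kubernetes-upgrade-075589 --kubernetes-version=v1.27.0-rc.0 --driver=kvm2   # upgrade: allowed
    out/minikube-linux-amd64 start -p kubernetes-upgrade-075589 --kubernetes-version=v1.16.0 --driver=kvm2        # downgrade: exit 106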

TestStoppedBinaryUpgrade/Setup (0.27s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.27s)

TestStoppedBinaryUpgrade/Upgrade (209.58s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /tmp/minikube-v1.6.2.2147666689.exe start -p stopped-upgrade-202435 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:191: (dbg) Done: /tmp/minikube-v1.6.2.2147666689.exe start -p stopped-upgrade-202435 --memory=2200 --vm-driver=kvm2 : (1m32.359066536s)
version_upgrade_test.go:200: (dbg) Run:  /tmp/minikube-v1.6.2.2147666689.exe -p stopped-upgrade-202435 stop
version_upgrade_test.go:200: (dbg) Done: /tmp/minikube-v1.6.2.2147666689.exe -p stopped-upgrade-202435 stop: (13.102049761s)
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-202435 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-202435 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m44.116945854s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (209.58s)
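
Binary-upgrade coverage in miniature: the cluster is created and stopped with an old release (here a cached minikube v1.6.2 under a temp name), then the binary under test adopts the stopped profile:

    /tmp/minikube-v1.6.2.2147666689.exe start -p stopped-upgrade-202435 --memory=2200 --vm-driver=kvm2
    /tmp/minikube-v1.6.2.2147666689.exe -p stopped-upgrade-202435 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-202435 --memory=2200 --driver=kvm2   # new binary restarts the old profile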

TestPause/serial/Start (112.79s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-939189 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
E0331 18:01:44.812611   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/skaffold-531248/client.crt: no such file or directory
E0331 18:01:44.817947   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/skaffold-531248/client.crt: no such file or directory
E0331 18:01:44.828274   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/skaffold-531248/client.crt: no such file or directory
E0331 18:01:44.848562   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/skaffold-531248/client.crt: no such file or directory
E0331 18:01:44.888914   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/skaffold-531248/client.crt: no such file or directory
E0331 18:01:44.969300   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/skaffold-531248/client.crt: no such file or directory
E0331 18:01:45.129713   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/skaffold-531248/client.crt: no such file or directory
E0331 18:01:45.450351   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/skaffold-531248/client.crt: no such file or directory
E0331 18:01:46.091366   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/skaffold-531248/client.crt: no such file or directory
E0331 18:01:47.372322   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/skaffold-531248/client.crt: no such file or directory
E0331 18:01:49.933441   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/skaffold-531248/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-939189 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m52.785622846s)
--- PASS: TestPause/serial/Start (112.79s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.05s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-202435
version_upgrade_test.go:214: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-202435: (2.052939677s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.05s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-746317 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-746317 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (68.20916ms)

-- stdout --
	* [NoKubernetes-746317] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16144
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16144-3494/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16144-3494/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
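
As the stderr spells out, --no-kubernetes and --kubernetes-version are mutually exclusive; when a version is pinned in the global config, the fix is to unset it first:

    minikube config unset kubernetes-version    # clear a globally pinned version, per the suggestion above
    out/minikube-linux-amd64 start -p NoKubernetes-746317 --no-kubernetes --driver=kvm2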

TestNetworkPlugins/group/auto/Start (73.91s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p auto-347180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E0331 18:04:28.656230   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/skaffold-531248/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p auto-347180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m13.914890287s)
--- PASS: TestNetworkPlugins/group/auto/Start (73.91s)

TestNoKubernetes/serial/Start (43.75s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-746317 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-746317 --no-kubernetes --driver=kvm2 : (43.75309988s)
--- PASS: TestNoKubernetes/serial/Start (43.75s)

TestNetworkPlugins/group/kindnet/Start (112.44s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-347180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-347180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m52.440721357s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (112.44s)
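
Across this group, the only start-time difference is the --cni flag: the auto run above omits it entirely, a name selects a built-in (kindnet, calico; false disables CNI), and a path points at a custom manifest. Two examples from this run:

    out/minikube-linux-amd64 start -p kindnet-347180 --memory=3072 --cni=kindnet --driver=kvm2
    out/minikube-linux-amd64 start -p custom-flannel-347180 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=kvm2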

TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-347180 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

TestNetworkPlugins/group/auto/NetCatPod (12.42s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-347180 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-x4s7z" [bc26dd1d-20da-49a8-a001-a5b91e0888ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-x4s7z" [bc26dd1d-20da-49a8-a001-a5b91e0888ce] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.00789209s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.42s)
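
The rollout above can be approximated without the harness's poller; kubectl's own wait covers the Pending-to-Running transition (a sketch, not the test's exact mechanism):

    kubectl --context auto-347180 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-347180 wait --for=condition=Ready pod -l app=netcat --timeout=15m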

TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-746317 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-746317 "sudo systemctl is-active --quiet service kubelet": exit status 1 (227.43254ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
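
The verification rides on systemd exit codes: is-active --quiet exits 0 only for an active unit, so the non-zero ssh exit above is the proof that no kubelet is running. By hand:

    out/minikube-linux-amd64 ssh -p NoKubernetes-746317 "sudo systemctl is-active --quiet service kubelet" \
        && echo "kubelet is active" || echo "kubelet is not running"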

TestNoKubernetes/serial/ProfileList (19.88s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (16.160929645s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.719983435s)
--- PASS: TestNoKubernetes/serial/ProfileList (19.88s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-347180 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-347180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-347180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
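
Taken together, the DNS, Localhost, and HairPin checks probe three paths from inside the netcat pod: cluster DNS, the pod's own localhost, and hairpin traffic back through its own service. The same probes, as the tests run them:

    kubectl --context auto-347180 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-347180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-347180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"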

TestNoKubernetes/serial/Stop (2.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-746317
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-746317: (2.273911469s)
--- PASS: TestNoKubernetes/serial/Stop (2.27s)

TestNoKubernetes/serial/StartNoArgs (26.09s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-746317 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-746317 --driver=kvm2 : (26.085181656s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (26.09s)

TestNetworkPlugins/group/calico/Start (129.74s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p calico-347180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p calico-347180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m9.736258472s)
--- PASS: TestNetworkPlugins/group/calico/Start (129.74s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (120.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-347180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-347180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (2m0.161207402s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (120.16s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-746317 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-746317 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.903698ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)
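The non-zero exit is the point of this check: systemctl is-active exits 0 only for an active unit and by convention exits 3 for an inactive one (matching the "status 3" in the stderr above), so a kubelet that is not running is exactly what a no-kubernetes profile should show. The same probe by hand (illustrative):

    out/minikube-linux-amd64 ssh -p NoKubernetes-746317 "sudo systemctl is-active kubelet"
    # expected here: prints "inactive" and exits with status 3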

                                                
                                    
TestNetworkPlugins/group/false/Start (133.5s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p false-347180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E0331 18:06:44.811772   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/skaffold-531248/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p false-347180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (2m13.503517156s)
--- PASS: TestNetworkPlugins/group/false/Start (133.50s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-4txhb" [003a0844-0844-4183-b1db-1710016a1860] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.018572936s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
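Each ControllerPod check in this group reduces to the same operation: wait until the pods carrying the CNI's label are Running and healthy in kube-system. Outside the harness, kubectl wait expresses that condition directly; a sketch using the label from the run above (illustrative, not part of the recorded run):

    kubectl --context kindnet-347180 -n kube-system wait pod -l app=kindnet --for=condition=Ready --timeout=10m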

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-347180 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.41s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-347180 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-4jd65" [e721d53e-ae5a-4d08-afc9-6135bc0a6cd4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-4jd65" [e721d53e-ae5a-4d08-afc9-6135bc0a6cd4] Running
E0331 18:07:09.780711   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.011913324s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.41s)
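The NetCatPod steps all share one shape: kubectl replace --force deletes and recreates the netcat deployment from testdata, then the harness polls app=netcat pods until they pass readiness. Reproduced by hand, the polling half can be delegated to a rollout wait (illustrative; the manifest path assumes the test repository's testdata directory):

    kubectl --context kindnet-347180 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context kindnet-347180 rollout status deployment/netcat --timeout=15m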

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-347180 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-347180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0331 18:07:12.496997   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/skaffold-531248/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-347180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (94.79s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-347180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0331 18:07:34.491411   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/gvisor-836132/client.crt: no such file or directory
E0331 18:07:44.732388   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/gvisor-836132/client.crt: no such file or directory
E0331 18:08:05.212634   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/gvisor-836132/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-347180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m34.790380529s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (94.79s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.04s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-f76ds" [bbeed6f1-6606-4989-a38b-c8a095725cc3] Running
E0331 18:08:15.615645   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.034098284s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-347180 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (15.75s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-347180 replace --force -f testdata/netcat-deployment.yaml
E0331 18:08:18.150837   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-xcphp" [b474b567-aac7-469f-b055-aa2e1a524ae1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-xcphp" [b474b567-aac7-469f-b055-aa2e1a524ae1] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.49961447s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.75s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.49s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-347180 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.49s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (16.08s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-347180 replace --force -f testdata/netcat-deployment.yaml
net_test.go:148: (dbg) Done: kubectl --context custom-flannel-347180 replace --force -f testdata/netcat-deployment.yaml: (1.97690357s)
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-8pz6x" [9f1dbc8b-2ef6-481f-b6ec-04ed0094eb92] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-8pz6x" [9f1dbc8b-2ef6-481f-b6ec-04ed0094eb92] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.00943043s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (16.08s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-347180 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-347180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-347180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-347180 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-347180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-347180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-347180 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (14.51s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-347180 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-k2rfw" [cdc411c6-e84e-4e3f-a0bf-d99ce1fb4216] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0331 18:08:46.173393   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/gvisor-836132/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-k2rfw" [cdc411c6-e84e-4e3f-a0bf-d99ce1fb4216] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 14.018158383s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (14.51s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (86.64s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-347180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p flannel-347180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m26.641039173s)
--- PASS: TestNetworkPlugins/group/flannel/Start (86.64s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-347180 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-347180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-347180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (138.77s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-347180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p bridge-347180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (2m18.772631922s)
--- PASS: TestNetworkPlugins/group/bridge/Start (138.77s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-347180 "pgrep -a kubelet"
E0331 18:09:06.732383   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.35s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-347180 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-dlpsf" [9be0b0e5-723a-4874-87ea-6501ac535b1f] Pending
helpers_test.go:344: "netcat-694fc96674-dlpsf" [9be0b0e5-723a-4874-87ea-6501ac535b1f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-dlpsf" [9be0b0e5-723a-4874-87ea-6501ac535b1f] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.008600468s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.35s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (127.31s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-347180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-347180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (2m7.305903976s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (127.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-347180 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-347180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-347180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (181.37s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-827180 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E0331 18:10:08.093655   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/gvisor-836132/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-827180 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (3m1.37316658s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (181.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nhjsn" [c581693b-8102-4d0b-af7d-e039a5c8ab25] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.032955904s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-347180 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (15.51s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-347180 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-8pd7x" [7e6799b3-b344-4576-87c4-f14642f547ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0331 18:10:33.339466   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt: no such file or directory
E0331 18:10:33.344722   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt: no such file or directory
E0331 18:10:33.354997   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt: no such file or directory
E0331 18:10:33.375296   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt: no such file or directory
E0331 18:10:33.415653   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt: no such file or directory
E0331 18:10:33.496803   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt: no such file or directory
E0331 18:10:33.657275   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt: no such file or directory
E0331 18:10:33.977707   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt: no such file or directory
E0331 18:10:34.618907   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-8pd7x" [7e6799b3-b344-4576-87c4-f14642f547ae] Running
E0331 18:10:35.900173   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt: no such file or directory
E0331 18:10:38.461319   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 15.010881426s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (15.51s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-347180 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-347180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-347180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (97.93s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-486352 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.27.0-rc.0
E0331 18:11:14.303168   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-486352 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.27.0-rc.0: (1m37.928046341s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (97.93s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-347180 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (14.44s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-347180 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-gd7cr" [44494289-0716-4829-86e8-f655ff97d739] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0331 18:11:18.660922   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-gd7cr" [44494289-0716-4829-86e8-f655ff97d739] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.015302955s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.44s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-347180 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (12.3s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-347180 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-crg4n" [f1823c5a-2cf0-4110-b68e-d72badc95b2c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-crg4n" [f1823c5a-2cf0-4110-b68e-d72badc95b2c] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.009310601s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-347180 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-347180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-347180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-347180 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-347180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-347180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (81.87s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-087662 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.26.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-087662 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.26.3: (1m21.868000303s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.87s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (106.7s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-637694 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.26.3
E0331 18:11:54.590077   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kindnet-347180/client.crt: no such file or directory
E0331 18:11:54.595406   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kindnet-347180/client.crt: no such file or directory
E0331 18:11:54.605852   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kindnet-347180/client.crt: no such file or directory
E0331 18:11:54.626158   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kindnet-347180/client.crt: no such file or directory
E0331 18:11:54.666548   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kindnet-347180/client.crt: no such file or directory
E0331 18:11:54.746860   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kindnet-347180/client.crt: no such file or directory
E0331 18:11:54.907820   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kindnet-347180/client.crt: no such file or directory
E0331 18:11:55.228505   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kindnet-347180/client.crt: no such file or directory
E0331 18:11:55.263760   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt: no such file or directory
E0331 18:11:55.868988   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kindnet-347180/client.crt: no such file or directory
E0331 18:11:57.149340   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kindnet-347180/client.crt: no such file or directory
E0331 18:11:59.710433   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kindnet-347180/client.crt: no such file or directory
E0331 18:12:04.831567   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kindnet-347180/client.crt: no such file or directory
E0331 18:12:15.072048   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kindnet-347180/client.crt: no such file or directory
E0331 18:12:24.249232   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/gvisor-836132/client.crt: no such file or directory
E0331 18:12:35.553930   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kindnet-347180/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-637694 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.26.3: (1m46.70103859s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (106.70s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.52s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-486352 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a1321eff-2b0c-4cdb-9c8a-0f896b726e87] Pending
helpers_test.go:344: "busybox" [a1321eff-2b0c-4cdb-9c8a-0f896b726e87] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a1321eff-2b0c-4cdb-9c8a-0f896b726e87] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.034240599s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-486352 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.52s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.53s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-827180 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4015b4cc-561f-44d3-96b8-95635ce23952] Pending
helpers_test.go:344: "busybox" [4015b4cc-561f-44d3-96b8-95635ce23952] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4015b4cc-561f-44d3-96b8-95635ce23952] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.024862718s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-827180 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.53s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.55s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-486352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-486352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.366190371s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-486352 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.55s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-827180 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-827180 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.42s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-827180 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-827180 --alsologtostderr -v=3: (13.424444616s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (14.17s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-486352 --alsologtostderr -v=3
E0331 18:12:51.934305   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/gvisor-836132/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-486352 --alsologtostderr -v=3: (14.170814369s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (14.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-827180 -n old-k8s-version-827180
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-827180 -n old-k8s-version-827180: exit status 7 (91.859333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-827180 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
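The tolerated exit code above is deliberate: minikube status reports machine state through its exit status, so a stopped host returns non-zero (7 in this run) even though --format={{.Host}} still prints the requested field. Checked by hand (illustrative, not part of the recorded run):

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-827180; echo "exit=$?"
    # Stopped
    # exit=7   (non-zero here means "stopped", not a command failure)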

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (448.8s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-827180 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-827180 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (7m28.539166958s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-827180 -n old-k8s-version-827180
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (448.80s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-486352 -n no-preload-486352
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-486352 -n no-preload-486352: exit status 7 (78.070574ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-486352 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (336.05s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-486352 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.27.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-486352 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.27.0-rc.0: (5m35.778545482s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-486352 -n no-preload-486352
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (336.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.5s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-087662 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b0dab287-5cea-491e-a4ac-7921361d403f] Pending
helpers_test.go:344: "busybox" [b0dab287-5cea-491e-a4ac-7921361d403f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0331 18:13:12.229030   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/calico-347180/client.crt: no such file or directory
E0331 18:13:12.234279   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/calico-347180/client.crt: no such file or directory
E0331 18:13:12.244569   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/calico-347180/client.crt: no such file or directory
E0331 18:13:12.264837   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/calico-347180/client.crt: no such file or directory
E0331 18:13:12.305164   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/calico-347180/client.crt: no such file or directory
E0331 18:13:12.385566   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/calico-347180/client.crt: no such file or directory
E0331 18:13:12.546017   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/calico-347180/client.crt: no such file or directory
E0331 18:13:12.866499   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/calico-347180/client.crt: no such file or directory
E0331 18:13:13.507063   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/calico-347180/client.crt: no such file or directory
E0331 18:13:14.788064   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/calico-347180/client.crt: no such file or directory
helpers_test.go:344: "busybox" [b0dab287-5cea-491e-a4ac-7921361d403f] Running
E0331 18:13:15.615522   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
E0331 18:13:16.514801   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kindnet-347180/client.crt: no such file or directory
E0331 18:13:17.184156   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt: no such file or directory
E0331 18:13:17.349118   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/calico-347180/client.crt: no such file or directory
E0331 18:13:18.151245   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.02901133s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-087662 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.50s)
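Note: the DeployApp step applies testdata/busybox.yaml, waits up to 8m0s for a pod matching integration-test=busybox to become Ready, then reads the pod's open-file limit. A minimal manual equivalent is sketched below; the inline manifest is a hypothetical approximation, not the repository's actual testdata/busybox.yaml:

kubectl --context embed-certs-087662 apply -f - <<'EOF'
# hypothetical stand-in for testdata/busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF
kubectl --context embed-certs-087662 wait --for=condition=Ready pod/busybox --timeout=8m
kubectl --context embed-certs-087662 exec busybox -- /bin/sh -c "ulimit -n"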

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-087662 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0331 18:13:21.122570   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/custom-flannel-347180/client.crt: no such file or directory
E0331 18:13:21.127829   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/custom-flannel-347180/client.crt: no such file or directory
E0331 18:13:21.138043   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/custom-flannel-347180/client.crt: no such file or directory
E0331 18:13:21.158347   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/custom-flannel-347180/client.crt: no such file or directory
E0331 18:13:21.198667   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/custom-flannel-347180/client.crt: no such file or directory
E0331 18:13:21.279013   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/custom-flannel-347180/client.crt: no such file or directory
E0331 18:13:21.439351   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/custom-flannel-347180/client.crt: no such file or directory
E0331 18:13:21.760216   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/custom-flannel-347180/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-087662 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.85s)
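Note: the enable-while-active step can be reproduced by hand with the same flags used above; the final command is a hypothetical extra check that the image override took effect:

out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-087662 \
  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
  --registries=MetricsServer=fake.domain
kubectl --context embed-certs-087662 describe deploy/metrics-server -n kube-system
# optional: confirm the deployment actually runs the overridden image
kubectl --context embed-certs-087662 get deploy metrics-server -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'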

TestStartStop/group/embed-certs/serial/Stop (15.15s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-087662 --alsologtostderr -v=3
E0331 18:13:22.401147   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/custom-flannel-347180/client.crt: no such file or directory
E0331 18:13:22.469310   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/calico-347180/client.crt: no such file or directory
E0331 18:13:23.682331   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/custom-flannel-347180/client.crt: no such file or directory
E0331 18:13:26.242593   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/custom-flannel-347180/client.crt: no such file or directory
E0331 18:13:31.363164   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/custom-flannel-347180/client.crt: no such file or directory
E0331 18:13:32.710461   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/calico-347180/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-087662 --alsologtostderr -v=3: (15.147868585s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (15.15s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-087662 -n embed-certs-087662
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-087662 -n embed-certs-087662: exit status 7 (85.456089ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-087662 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
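Note: as the "(may be ok)" annotation indicates, minikube status exits non-zero when the host is not Running; in the run above, exit status 7 accompanies a Stopped host. A script reproducing this step therefore has to tolerate that exit code before enabling the addon, roughly:

out/minikube-linux-amd64 status --format='{{.Host}}' -p embed-certs-087662 || true  # prints "Stopped", non-zero exit while the VM is down
out/minikube-linux-amd64 addons enable dashboard -p embed-certs-087662 --images=MetricsScraper=registry.k8s.io/echoserver:1.4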

TestStartStop/group/embed-certs/serial/SecondStart (323.86s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-087662 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.26.3
E0331 18:13:39.542121   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/false-347180/client.crt: no such file or directory
E0331 18:13:39.547445   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/false-347180/client.crt: no such file or directory
E0331 18:13:39.557768   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/false-347180/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-087662 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.26.3: (5m23.555249372s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-087662 -n embed-certs-087662
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (323.86s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-637694 create -f testdata/busybox.yaml
E0331 18:13:39.578306   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/false-347180/client.crt: no such file or directory
E0331 18:13:39.619067   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/false-347180/client.crt: no such file or directory
E0331 18:13:39.699571   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/false-347180/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
E0331 18:13:39.860410   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/false-347180/client.crt: no such file or directory
helpers_test.go:344: "busybox" [b77a8467-8b20-48dd-83ff-81f29e3e7569] Pending
E0331 18:13:40.180649   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/false-347180/client.crt: no such file or directory
E0331 18:13:40.820908   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/false-347180/client.crt: no such file or directory
helpers_test.go:344: "busybox" [b77a8467-8b20-48dd-83ff-81f29e3e7569] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0331 18:13:41.604138   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/custom-flannel-347180/client.crt: no such file or directory
E0331 18:13:42.101086   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/false-347180/client.crt: no such file or directory
E0331 18:13:44.661705   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/false-347180/client.crt: no such file or directory
helpers_test.go:344: "busybox" [b77a8467-8b20-48dd-83ff-81f29e3e7569] Running
E0331 18:13:49.782102   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/false-347180/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 12.015453664s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-637694 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.51s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-637694 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-637694 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-637694 --alsologtostderr -v=3
E0331 18:13:53.191281   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/calico-347180/client.crt: no such file or directory
E0331 18:14:00.023093   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/false-347180/client.crt: no such file or directory
E0331 18:14:02.085079   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/custom-flannel-347180/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-637694 --alsologtostderr -v=3: (13.17392897s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.17s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-637694 -n default-k8s-diff-port-637694
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-637694 -n default-k8s-diff-port-637694: exit status 7 (80.946019ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-637694 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (314.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-637694 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.26.3
E0331 18:14:06.731780   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
E0331 18:14:07.100772   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/enable-default-cni-347180/client.crt: no such file or directory
E0331 18:14:07.106085   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/enable-default-cni-347180/client.crt: no such file or directory
E0331 18:14:07.116446   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/enable-default-cni-347180/client.crt: no such file or directory
E0331 18:14:07.136748   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/enable-default-cni-347180/client.crt: no such file or directory
E0331 18:14:07.177927   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/enable-default-cni-347180/client.crt: no such file or directory
E0331 18:14:07.258244   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/enable-default-cni-347180/client.crt: no such file or directory
E0331 18:14:07.418717   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/enable-default-cni-347180/client.crt: no such file or directory
E0331 18:14:07.739118   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/enable-default-cni-347180/client.crt: no such file or directory
E0331 18:14:08.380098   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/enable-default-cni-347180/client.crt: no such file or directory
E0331 18:14:09.660897   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/enable-default-cni-347180/client.crt: no such file or directory
E0331 18:14:12.221722   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/enable-default-cni-347180/client.crt: no such file or directory
E0331 18:14:17.342827   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/enable-default-cni-347180/client.crt: no such file or directory
E0331 18:14:20.504225   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/false-347180/client.crt: no such file or directory
E0331 18:14:27.583402   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/enable-default-cni-347180/client.crt: no such file or directory
E0331 18:14:34.152206   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/calico-347180/client.crt: no such file or directory
E0331 18:14:38.435902   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kindnet-347180/client.crt: no such file or directory
E0331 18:14:43.045471   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/custom-flannel-347180/client.crt: no such file or directory
E0331 18:14:48.064343   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/enable-default-cni-347180/client.crt: no such file or directory
E0331 18:15:01.465124   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/false-347180/client.crt: no such file or directory
E0331 18:15:19.650977   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/flannel-347180/client.crt: no such file or directory
E0331 18:15:19.656278   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/flannel-347180/client.crt: no such file or directory
E0331 18:15:19.666613   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/flannel-347180/client.crt: no such file or directory
E0331 18:15:19.686918   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/flannel-347180/client.crt: no such file or directory
E0331 18:15:19.727960   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/flannel-347180/client.crt: no such file or directory
E0331 18:15:19.808278   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/flannel-347180/client.crt: no such file or directory
E0331 18:15:19.968732   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/flannel-347180/client.crt: no such file or directory
E0331 18:15:20.289317   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/flannel-347180/client.crt: no such file or directory
E0331 18:15:20.929862   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/flannel-347180/client.crt: no such file or directory
E0331 18:15:22.210519   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/flannel-347180/client.crt: no such file or directory
E0331 18:15:24.770664   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/flannel-347180/client.crt: no such file or directory
E0331 18:15:29.025338   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/enable-default-cni-347180/client.crt: no such file or directory
E0331 18:15:29.891254   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/flannel-347180/client.crt: no such file or directory
E0331 18:15:33.339133   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt: no such file or directory
E0331 18:15:40.131919   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/flannel-347180/client.crt: no such file or directory
E0331 18:15:56.072917   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/calico-347180/client.crt: no such file or directory
E0331 18:16:00.612563   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/flannel-347180/client.crt: no such file or directory
E0331 18:16:01.025078   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt: no such file or directory
E0331 18:16:04.966614   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/custom-flannel-347180/client.crt: no such file or directory
E0331 18:16:15.185748   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/bridge-347180/client.crt: no such file or directory
E0331 18:16:15.191139   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/bridge-347180/client.crt: no such file or directory
E0331 18:16:15.201441   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/bridge-347180/client.crt: no such file or directory
E0331 18:16:15.221742   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/bridge-347180/client.crt: no such file or directory
E0331 18:16:15.262063   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/bridge-347180/client.crt: no such file or directory
E0331 18:16:15.342381   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/bridge-347180/client.crt: no such file or directory
E0331 18:16:15.502813   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/bridge-347180/client.crt: no such file or directory
E0331 18:16:15.823116   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/bridge-347180/client.crt: no such file or directory
E0331 18:16:16.464224   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/bridge-347180/client.crt: no such file or directory
E0331 18:16:17.744597   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/bridge-347180/client.crt: no such file or directory
E0331 18:16:19.210041   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kubenet-347180/client.crt: no such file or directory
E0331 18:16:19.215868   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kubenet-347180/client.crt: no such file or directory
E0331 18:16:19.226197   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kubenet-347180/client.crt: no such file or directory
E0331 18:16:19.246485   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kubenet-347180/client.crt: no such file or directory
E0331 18:16:19.286866   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kubenet-347180/client.crt: no such file or directory
E0331 18:16:19.367190   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kubenet-347180/client.crt: no such file or directory
E0331 18:16:19.527746   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kubenet-347180/client.crt: no such file or directory
E0331 18:16:19.848525   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kubenet-347180/client.crt: no such file or directory
E0331 18:16:20.305577   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/bridge-347180/client.crt: no such file or directory
E0331 18:16:20.488919   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kubenet-347180/client.crt: no such file or directory
E0331 18:16:21.769074   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kubenet-347180/client.crt: no such file or directory
E0331 18:16:23.385925   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/false-347180/client.crt: no such file or directory
E0331 18:16:24.329502   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kubenet-347180/client.crt: no such file or directory
E0331 18:16:25.426378   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/bridge-347180/client.crt: no such file or directory
E0331 18:16:29.450453   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kubenet-347180/client.crt: no such file or directory
E0331 18:16:35.666764   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/bridge-347180/client.crt: no such file or directory
E0331 18:16:39.690932   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kubenet-347180/client.crt: no such file or directory
E0331 18:16:41.573631   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/flannel-347180/client.crt: no such file or directory
E0331 18:16:44.812521   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/skaffold-531248/client.crt: no such file or directory
E0331 18:16:50.945931   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/enable-default-cni-347180/client.crt: no such file or directory
E0331 18:16:54.590107   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kindnet-347180/client.crt: no such file or directory
E0331 18:16:56.147462   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/bridge-347180/client.crt: no such file or directory
E0331 18:17:00.171183   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kubenet-347180/client.crt: no such file or directory
E0331 18:17:22.276099   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kindnet-347180/client.crt: no such file or directory
E0331 18:17:24.249519   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/gvisor-836132/client.crt: no such file or directory
E0331 18:17:37.107923   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/bridge-347180/client.crt: no such file or directory
E0331 18:17:41.131361   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kubenet-347180/client.crt: no such file or directory
E0331 18:18:01.201195   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
E0331 18:18:03.494672   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/flannel-347180/client.crt: no such file or directory
E0331 18:18:07.857838   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/skaffold-531248/client.crt: no such file or directory
E0331 18:18:12.229674   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/calico-347180/client.crt: no such file or directory
E0331 18:18:15.615570   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/ingress-addon-legacy-757983/client.crt: no such file or directory
E0331 18:18:18.151197   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/addons-104430/client.crt: no such file or directory
E0331 18:18:21.123236   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/custom-flannel-347180/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-637694 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.26.3: (5m14.10087701s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-637694 -n default-k8s-diff-port-637694
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (314.37s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-mks9q" [97b504d6-51bc-4718-83a7-f6527e8e49f1] Running
E0331 18:18:39.541993   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/false-347180/client.crt: no such file or directory
E0331 18:18:39.913920   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/calico-347180/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01796487s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)
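Note: the readiness poll above (helpers_test.go:344) can be approximated with kubectl wait against the same label selector; the 9m timeout mirrors the test's own wait:

kubectl --context no-preload-486352 -n kubernetes-dashboard \
  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m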

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-mks9q" [97b504d6-51bc-4718-83a7-f6527e8e49f1] Running
E0331 18:18:48.806978   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/custom-flannel-347180/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009009634s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-486352 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-486352 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)
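Note: the image audit shells into the node and dumps the CRI image list as JSON, which the test then scans for images outside the expected minikube set. A manual equivalent, assuming jq is installed on the host:

out/minikube-linux-amd64 ssh -p no-preload-486352 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'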

TestStartStop/group/no-preload/serial/Pause (2.96s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-486352 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-486352 -n no-preload-486352
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-486352 -n no-preload-486352: exit status 2 (281.08563ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-486352 -n no-preload-486352
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-486352 -n no-preload-486352: exit status 2 (282.797395ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-486352 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-486352 -n no-preload-486352
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-486352 -n no-preload-486352
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.96s)
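Note: the Pause subtest cycles pause, status, unpause, status; while paused, status exits 2 with the APIServer reported as Paused and the Kubelet as Stopped, as captured above. A sketch of the same sequence:

out/minikube-linux-amd64 pause -p no-preload-486352 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p no-preload-486352 -n no-preload-486352 || true  # "Paused", exit 2
out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p no-preload-486352 -n no-preload-486352 || true    # "Stopped", exit 2
out/minikube-linux-amd64 unpause -p no-preload-486352 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p no-preload-486352 -n no-preload-486352
out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p no-preload-486352 -n no-preload-486352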

TestStartStop/group/newest-cni/serial/FirstStart (76.65s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-961446 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.27.0-rc.0
E0331 18:18:59.028568   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/bridge-347180/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-961446 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.27.0-rc.0: (1m16.653749122s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (76.65s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-89hrj" [85b0ffbb-591f-4c77-b38b-4927dae6ac24] Running
E0331 18:19:03.052503   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/kubenet-347180/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018879024s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-89hrj" [85b0ffbb-591f-4c77-b38b-4927dae6ac24] Running
E0331 18:19:06.731770   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/functional-217220/client.crt: no such file or directory
E0331 18:19:07.100602   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/enable-default-cni-347180/client.crt: no such file or directory
E0331 18:19:07.226822   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/false-347180/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008667708s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-087662 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-087662 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.69s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-087662 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-087662 -n embed-certs-087662
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-087662 -n embed-certs-087662: exit status 2 (247.204447ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-087662 -n embed-certs-087662
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-087662 -n embed-certs-087662: exit status 2 (254.554234ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-087662 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-087662 -n embed-certs-087662
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-087662 -n embed-certs-087662
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.69s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-8672f" [424da787-6524-45dc-84ff-9f6d39787acb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019050569s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-8672f" [424da787-6524-45dc-84ff-9f6d39787acb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009649935s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-637694 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-637694 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-637694 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-637694 -n default-k8s-diff-port-637694
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-637694 -n default-k8s-diff-port-637694: exit status 2 (247.804578ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-637694 -n default-k8s-diff-port-637694
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-637694 -n default-k8s-diff-port-637694: exit status 2 (286.364457ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-637694 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-637694 -n default-k8s-diff-port-637694
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-637694 -n default-k8s-diff-port-637694
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.99s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-961446 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-961446 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.025630125s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/newest-cni/serial/Stop (8.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-961446 --alsologtostderr -v=3
E0331 18:20:19.651069   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/flannel-347180/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-961446 --alsologtostderr -v=3: (8.10565419s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.11s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-961446 -n newest-cni-961446
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-961446 -n newest-cni-961446: exit status 7 (55.832134ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-961446 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/newest-cni/serial/SecondStart (47.23s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-961446 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.27.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-961446 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.27.0-rc.0: (46.986435039s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-961446 -n newest-cni-961446
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (47.23s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-fj68j" [42bcfc09-fc3e-4cac-b447-5a506ba68371] Running
E0331 18:20:33.339207   10540 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01709008s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-fj68j" [42bcfc09-fc3e-4cac-b447-5a506ba68371] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008685109s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-827180 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-827180 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (2.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-827180 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-827180 -n old-k8s-version-827180
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-827180 -n old-k8s-version-827180: exit status 2 (233.684282ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-827180 -n old-k8s-version-827180
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-827180 -n old-k8s-version-827180: exit status 2 (241.056969ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-827180 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-827180 -n old-k8s-version-827180
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-827180 -n old-k8s-version-827180
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.44s)
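
The Pause subtest drives a fixed command sequence: pause, read component status, unpause, read status again. `minikube status` deliberately exits 2 when a component is paused or stopped, which is why the log records "status error: exit status 2 (may be ok)" instead of failing. A compact sketch of that sequence (profile name taken from the run above; error handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary and returns the process exit code.
func run(args ...string) int {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		return exitErr.ExitCode()
	}
	if err != nil {
		panic(err) // binary missing, not a status failure
	}
	return 0
}

func main() {
	profile := "old-k8s-version-827180"
	run("pause", "-p", profile, "--alsologtostderr", "-v=1")
	// While paused, APIServer prints "Paused", Kubelet prints "Stopped",
	// and status exits 2, tolerated as "may be ok" in the log above.
	if code := run("status", "--format={{.APIServer}}", "-p", profile); code != 0 && code != 2 {
		panic(fmt.Sprintf("unexpected status exit code %d", code))
	}
	run("unpause", "-p", profile, "--alsologtostderr", "-v=1")
	// After unpause both components should report Running and exit 0.
	if code := run("status", "--format={{.Kubelet}}", "-p", profile); code != 0 {
		panic("kubelet not running after unpause")
	}
}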

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-961446 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (2.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-961446 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-961446 -n newest-cni-961446
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-961446 -n newest-cni-961446: exit status 2 (224.445289ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-961446 -n newest-cni-961446
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-961446 -n newest-cni-961446: exit status 2 (232.739072ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-961446 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-961446 -n newest-cni-961446
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-961446 -n newest-cni-961446
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.26s)


Test skip (33/312)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.26.3/cached-images 0
13 TestDownloadOnly/v1.26.3/binaries 0
14 TestDownloadOnly/v1.26.3/kubectl 0
19 TestDownloadOnly/v1.27.0-rc.0/cached-images 0
20 TestDownloadOnly/v1.27.0-rc.0/binaries 0
21 TestDownloadOnly/v1.27.0-rc.0/kubectl 0
25 TestDownloadOnlyKic 0
35 TestAddons/parallel/Olm 0
49 TestHyperKitDriverInstallOrUpdate 0
50 TestHyperkitDriverSkipUpgrade 0
101 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.04
119 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.04
156 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
189 TestKicCustomNetwork 0
190 TestKicExistingNetwork 0
191 TestKicCustomSubnet 0
192 TestKicStaticIP 0
223 TestChangeNoneUser 0
226 TestScheduledStopWindows 0
230 TestInsufficientStorage 0
234 TestMissingContainerUpgrade 0
245 TestNetworkPlugins/group/cilium 3.47
256 TestStartStop/group/disable-driver-mounts 0.64
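
Nearly every entry in this table is an environment gate rather than a real omission: the run used --driver=kvm2 on linux/amd64, so docker/podman-only (TestKic*, TestDownloadOnlyKic, TestInsufficientStorage), darwin-only (TestHyperKit*), windows-only (TestScheduledStopWindows), and none-only (TestChangeNoneUser) tests bail out immediately, as shown in the sketch below. The two guard shapes involved look roughly like this (helper names and the env var are hypothetical; the real checks live in the individual *_test.go files):

package integration

import (
	"os"
	"runtime"
	"testing"
)

// skipUnlessDriver is a hypothetical helper; the real tests consult the
// suite's --driver test flag rather than this env var.
func skipUnlessDriver(t *testing.T, want string) {
	t.Helper()
	if got := os.Getenv("TEST_DRIVER"); got != want {
		t.Skipf("only runs with %s driver, currently testing %q", want, got)
	}
}

// OS gates look the same, cf. "Skip if not darwin." above.
func skipUnlessDarwin(t *testing.T) {
	t.Helper()
	if runtime.GOOS != "darwin" {
		t.Skip("Skip if not darwin.")
	}
}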

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.26.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.26.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.3/cached-images (0.00s)

TestDownloadOnly/v1.26.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.26.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.3/binaries (0.00s)

TestDownloadOnly/v1.26.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.26.3/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.26.3/kubectl (0.00s)

TestDownloadOnly/v1.27.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.0-rc.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.27.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.0-rc.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.27.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.0-rc.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:214: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:473: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.04s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.04s)
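
All eight TunnelCmd subtests skip for the same reason: the tunnel helpers need to modify the host routing table, and this CI agent cannot run `route` without a password prompt. A sketch of what such a probe can look like (the exact command the suite probes with is an assumption; `sudo -n` refuses to prompt, so it fails fast on agents without passwordless sudo):

package integration

import (
	"os/exec"
	"testing"
)

// checkRoutePassword skips the tunnel tests when changing routes would
// require an interactive password.
func checkRoutePassword(t *testing.T) {
	t.Helper()
	if err := exec.Command("sudo", "-n", "route").Run(); err != nil {
		t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
	}
}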

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:292: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.47s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-347180 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-347180

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-347180

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-347180

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-347180

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-347180

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-347180

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-347180

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-347180

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-347180

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-347180

>>> host: /etc/nsswitch.conf:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: /etc/hosts:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: /etc/resolv.conf:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-347180

>>> host: crictl pods:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: crictl containers:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> k8s: describe netcat deployment:
error: context "cilium-347180" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-347180" does not exist

>>> k8s: netcat logs:
error: context "cilium-347180" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-347180" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-347180" does not exist

>>> k8s: coredns logs:
error: context "cilium-347180" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-347180" does not exist

>>> k8s: api server logs:
error: context "cilium-347180" does not exist

>>> host: /etc/cni:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: ip a s:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: ip r s:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: iptables-save:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: iptables table nat:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-347180

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-347180

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-347180" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-347180" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-347180

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-347180

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-347180" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-347180" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-347180" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-347180" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-347180" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: kubelet daemon config:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> k8s: kubelet logs:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-347180

>>> host: docker daemon status:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: docker daemon config:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: docker system info:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: cri-docker daemon status:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: cri-docker daemon config:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: cri-dockerd version:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: containerd daemon status:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: containerd daemon config:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: containerd config dump:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: crio daemon status:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: crio daemon config:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: /etc/crio:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

>>> host: crio config:
* Profile "cilium-347180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-347180"

----------------------- debugLogs end: cilium-347180 [took: 3.058089929s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-347180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-347180
--- SKIP: TestNetworkPlugins/group/cilium (3.47s)
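
The debugLogs sweep above is a fixed battery of probes run even for a skipped plugin: kubectl queries against the profile's context and `minikube ssh` queries against its host. Because the cilium-347180 profile was never started, every kubectl probe fails with "context was not found" and every host probe with "Profile not found". A condensed sketch of that gather loop (probe set abbreviated; labels and commands chosen to mirror the output above, not lifted from the suite's source):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "cilium-347180"
	// A few representative probes; the real sweep runs dozens.
	probes := []struct {
		label string
		argv  []string
	}{
		{">>> k8s: kubectl config", []string{"kubectl", "config", "view"}},
		{">>> host: ip a s", []string{"out/minikube-linux-amd64", "ssh", "-p", profile, "ip a s"}},
		{">>> k8s: kube-proxy logs", []string{"kubectl", "--context", profile,
			"logs", "-n", "kube-system", "-l", "k8s-app=kube-proxy"}},
	}
	for _, p := range probes {
		fmt.Println(p.label + ":")
		out, _ := exec.Command(p.argv[0], p.argv[1:]...).CombinedOutput()
		fmt.Println(string(out)) // failures are recorded, not fatal
	}
}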

TestStartStop/group/disable-driver-mounts (0.64s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-423430" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-423430
--- SKIP: TestStartStop/group/disable-driver-mounts (0.64s)
