Test Report: KVM_Linux 17314

720b04249cd58de6fa013ef84ee34e212d9c3117:2023-10-06:31319

Tests failed (2/320)

| Order | Failed Test                                                       | Duration (s) |
|-------|-------------------------------------------------------------------|--------------|
| 307   | TestNoKubernetes/serial/StartNoArgs                               | 19.69        |
| 387   | TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages | 1.9          |
TestNoKubernetes/serial/StartNoArgs (19.69s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-124473 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-124473 --driver=kvm2 : signal: killed (19.592714799s)

-- stdout --
	* [NoKubernetes-124473] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-68418/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-68418/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-124473

-- /stdout --
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-linux-amd64 start -p NoKubernetes-124473 --driver=kvm2 " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-124473 -n NoKubernetes-124473
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-124473 -n NoKubernetes-124473: exit status 7 (97.087322ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-124473" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (19.69s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-456697 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p old-k8s-version-456697 "sudo crictl images -o json": exit status 1 (230.46083ms)

-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p old-k8s-version-456697 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456697 -n old-k8s-version-456697
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-456697 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p no-preload-009149                                   | no-preload-009149            | jenkins | v1.31.2 | 06 Oct 23 01:40 UTC | 06 Oct 23 01:40 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-009149                                   | no-preload-009149            | jenkins | v1.31.2 | 06 Oct 23 01:40 UTC | 06 Oct 23 01:40 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-009149                                   | no-preload-009149            | jenkins | v1.31.2 | 06 Oct 23 01:40 UTC | 06 Oct 23 01:40 UTC |
	| delete  | -p no-preload-009149                                   | no-preload-009149            | jenkins | v1.31.2 | 06 Oct 23 01:40 UTC | 06 Oct 23 01:40 UTC |
	| start   | -p newest-cni-516412 --memory=2200 --alsologtostderr   | newest-cni-516412            | jenkins | v1.31.2 | 06 Oct 23 01:40 UTC | 06 Oct 23 01:41 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.28.2            |                              |         |         |                     |                     |
	| ssh     | -p                                                     | default-k8s-diff-port-987060 | jenkins | v1.31.2 | 06 Oct 23 01:41 UTC | 06 Oct 23 01:41 UTC |
	|         | default-k8s-diff-port-987060                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-987060 | jenkins | v1.31.2 | 06 Oct 23 01:41 UTC | 06 Oct 23 01:41 UTC |
	|         | default-k8s-diff-port-987060                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-987060 | jenkins | v1.31.2 | 06 Oct 23 01:41 UTC | 06 Oct 23 01:41 UTC |
	|         | default-k8s-diff-port-987060                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-987060 | jenkins | v1.31.2 | 06 Oct 23 01:41 UTC | 06 Oct 23 01:41 UTC |
	|         | default-k8s-diff-port-987060                           |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-150489 sudo                             | embed-certs-150489           | jenkins | v1.31.2 | 06 Oct 23 01:41 UTC | 06 Oct 23 01:41 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-987060 | jenkins | v1.31.2 | 06 Oct 23 01:41 UTC | 06 Oct 23 01:41 UTC |
	|         | default-k8s-diff-port-987060                           |                              |         |         |                     |                     |
	| pause   | -p embed-certs-150489                                  | embed-certs-150489           | jenkins | v1.31.2 | 06 Oct 23 01:41 UTC | 06 Oct 23 01:41 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-150489                                  | embed-certs-150489           | jenkins | v1.31.2 | 06 Oct 23 01:41 UTC | 06 Oct 23 01:41 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-150489                                  | embed-certs-150489           | jenkins | v1.31.2 | 06 Oct 23 01:41 UTC | 06 Oct 23 01:41 UTC |
	| delete  | -p embed-certs-150489                                  | embed-certs-150489           | jenkins | v1.31.2 | 06 Oct 23 01:41 UTC | 06 Oct 23 01:41 UTC |
	| addons  | enable metrics-server -p newest-cni-516412             | newest-cni-516412            | jenkins | v1.31.2 | 06 Oct 23 01:41 UTC | 06 Oct 23 01:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-516412                                   | newest-cni-516412            | jenkins | v1.31.2 | 06 Oct 23 01:41 UTC | 06 Oct 23 01:41 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-516412                  | newest-cni-516412            | jenkins | v1.31.2 | 06 Oct 23 01:41 UTC | 06 Oct 23 01:41 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-516412 --memory=2200 --alsologtostderr   | newest-cni-516412            | jenkins | v1.31.2 | 06 Oct 23 01:41 UTC | 06 Oct 23 01:42 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.28.2            |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-516412 sudo                              | newest-cni-516412            | jenkins | v1.31.2 | 06 Oct 23 01:42 UTC | 06 Oct 23 01:42 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-516412                                   | newest-cni-516412            | jenkins | v1.31.2 | 06 Oct 23 01:42 UTC | 06 Oct 23 01:42 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-516412                                   | newest-cni-516412            | jenkins | v1.31.2 | 06 Oct 23 01:42 UTC | 06 Oct 23 01:42 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-516412                                   | newest-cni-516412            | jenkins | v1.31.2 | 06 Oct 23 01:42 UTC | 06 Oct 23 01:42 UTC |
	| delete  | -p newest-cni-516412                                   | newest-cni-516412            | jenkins | v1.31.2 | 06 Oct 23 01:42 UTC | 06 Oct 23 01:42 UTC |
	| ssh     | -p old-k8s-version-456697 sudo                         | old-k8s-version-456697       | jenkins | v1.31.2 | 06 Oct 23 01:42 UTC |                     |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/06 01:41:50
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 01:41:50.266009  123852 out.go:296] Setting OutFile to fd 1 ...
	I1006 01:41:50.266152  123852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:41:50.266162  123852 out.go:309] Setting ErrFile to fd 2...
	I1006 01:41:50.266166  123852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:41:50.266359  123852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-68418/.minikube/bin
	I1006 01:41:50.266944  123852 out.go:303] Setting JSON to false
	I1006 01:41:50.267875  123852 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8663,"bootTime":1696547847,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 01:41:50.267940  123852 start.go:138] virtualization: kvm guest
	I1006 01:41:50.270427  123852 out.go:177] * [newest-cni-516412] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1006 01:41:50.271835  123852 out.go:177]   - MINIKUBE_LOCATION=17314
	I1006 01:41:50.271868  123852 notify.go:220] Checking for updates...
	I1006 01:41:50.273258  123852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 01:41:50.274776  123852 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17314-68418/kubeconfig
	I1006 01:41:50.276431  123852 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-68418/.minikube
	I1006 01:41:50.277929  123852 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 01:41:50.280323  123852 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 01:41:50.282399  123852 config.go:182] Loaded profile config "newest-cni-516412": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1006 01:41:50.283079  123852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:41:50.283150  123852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:41:50.298414  123852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43529
	I1006 01:41:50.298841  123852 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:41:50.299434  123852 main.go:141] libmachine: Using API Version  1
	I1006 01:41:50.299461  123852 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:41:50.299810  123852 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:41:50.300047  123852 main.go:141] libmachine: (newest-cni-516412) Calling .DriverName
	I1006 01:41:50.300303  123852 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 01:41:50.300634  123852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:41:50.300682  123852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:41:50.315120  123852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37967
	I1006 01:41:50.315566  123852 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:41:50.316105  123852 main.go:141] libmachine: Using API Version  1
	I1006 01:41:50.316129  123852 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:41:50.316444  123852 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:41:50.316661  123852 main.go:141] libmachine: (newest-cni-516412) Calling .DriverName
	I1006 01:41:50.352393  123852 out.go:177] * Using the kvm2 driver based on existing profile
	I1006 01:41:50.353797  123852 start.go:298] selected driver: kvm2
	I1006 01:41:50.353814  123852 start.go:902] validating driver "kvm2" against &{Name:newest-cni-516412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-516412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready
:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 01:41:50.353951  123852 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 01:41:50.354681  123852 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 01:41:50.354776  123852 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17314-68418/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 01:41:50.370174  123852 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1006 01:41:50.370631  123852 start_flags.go:945] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1006 01:41:50.370713  123852 cni.go:84] Creating CNI manager for ""
	I1006 01:41:50.370748  123852 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 01:41:50.370766  123852 start_flags.go:323] config:
	{Name:newest-cni-516412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-516412 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPort
s:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 01:41:50.370990  123852 iso.go:125] acquiring lock: {Name:mk09b1b55bb2317f3231832cf8a32146ecf7bf7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 01:41:50.372675  123852 out.go:177] * Starting control plane node newest-cni-516412 in cluster newest-cni-516412
	I1006 01:41:50.373903  123852 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1006 01:41:50.373954  123852 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17314-68418/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1006 01:41:50.373970  123852 cache.go:57] Caching tarball of preloaded images
	I1006 01:41:50.374090  123852 preload.go:174] Found /home/jenkins/minikube-integration/17314-68418/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1006 01:41:50.374103  123852 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1006 01:41:50.374270  123852 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/newest-cni-516412/config.json ...
	I1006 01:41:50.374525  123852 start.go:365] acquiring machines lock for newest-cni-516412: {Name:mkcaed0eb12b04929d3c9fe113bd3de3e3030e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1006 01:41:50.374585  123852 start.go:369] acquired machines lock for "newest-cni-516412" in 34.302µs
	I1006 01:41:50.374604  123852 start.go:96] Skipping create...Using existing machine configuration
	I1006 01:41:50.374615  123852 fix.go:54] fixHost starting: 
	I1006 01:41:50.374948  123852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:41:50.374990  123852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:41:50.389599  123852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39633
	I1006 01:41:50.390133  123852 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:41:50.390618  123852 main.go:141] libmachine: Using API Version  1
	I1006 01:41:50.390642  123852 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:41:50.391093  123852 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:41:50.391285  123852 main.go:141] libmachine: (newest-cni-516412) Calling .DriverName
	I1006 01:41:50.391458  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetState
	I1006 01:41:50.393333  123852 fix.go:102] recreateIfNeeded on newest-cni-516412: state=Stopped err=<nil>
	I1006 01:41:50.393362  123852 main.go:141] libmachine: (newest-cni-516412) Calling .DriverName
	W1006 01:41:50.393583  123852 fix.go:128] unexpected machine state, will restart: <nil>
	I1006 01:41:50.395565  123852 out.go:177] * Restarting existing kvm2 VM for "newest-cni-516412" ...
	I1006 01:41:50.396928  123852 main.go:141] libmachine: (newest-cni-516412) Calling .Start
	I1006 01:41:50.397161  123852 main.go:141] libmachine: (newest-cni-516412) Ensuring networks are active...
	I1006 01:41:50.398019  123852 main.go:141] libmachine: (newest-cni-516412) Ensuring network default is active
	I1006 01:41:50.398351  123852 main.go:141] libmachine: (newest-cni-516412) Ensuring network mk-newest-cni-516412 is active
	I1006 01:41:50.398829  123852 main.go:141] libmachine: (newest-cni-516412) Getting domain xml...
	I1006 01:41:50.399569  123852 main.go:141] libmachine: (newest-cni-516412) Creating domain...
	I1006 01:41:51.648091  123852 main.go:141] libmachine: (newest-cni-516412) Waiting to get IP...
	I1006 01:41:51.649131  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:41:51.649651  123852 main.go:141] libmachine: (newest-cni-516412) DBG | unable to find current IP address of domain newest-cni-516412 in network mk-newest-cni-516412
	I1006 01:41:51.649767  123852 main.go:141] libmachine: (newest-cni-516412) DBG | I1006 01:41:51.649638  123887 retry.go:31] will retry after 193.484028ms: waiting for machine to come up
	I1006 01:41:51.845226  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:41:51.845819  123852 main.go:141] libmachine: (newest-cni-516412) DBG | unable to find current IP address of domain newest-cni-516412 in network mk-newest-cni-516412
	I1006 01:41:51.845847  123852 main.go:141] libmachine: (newest-cni-516412) DBG | I1006 01:41:51.845750  123887 retry.go:31] will retry after 359.108425ms: waiting for machine to come up
	I1006 01:41:52.206298  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:41:52.206896  123852 main.go:141] libmachine: (newest-cni-516412) DBG | unable to find current IP address of domain newest-cni-516412 in network mk-newest-cni-516412
	I1006 01:41:52.206928  123852 main.go:141] libmachine: (newest-cni-516412) DBG | I1006 01:41:52.206830  123887 retry.go:31] will retry after 385.713797ms: waiting for machine to come up
	I1006 01:41:52.594436  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:41:52.594983  123852 main.go:141] libmachine: (newest-cni-516412) DBG | unable to find current IP address of domain newest-cni-516412 in network mk-newest-cni-516412
	I1006 01:41:52.595015  123852 main.go:141] libmachine: (newest-cni-516412) DBG | I1006 01:41:52.594930  123887 retry.go:31] will retry after 581.958545ms: waiting for machine to come up
	I1006 01:41:53.178949  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:41:53.179554  123852 main.go:141] libmachine: (newest-cni-516412) DBG | unable to find current IP address of domain newest-cni-516412 in network mk-newest-cni-516412
	I1006 01:41:53.179586  123852 main.go:141] libmachine: (newest-cni-516412) DBG | I1006 01:41:53.179497  123887 retry.go:31] will retry after 580.744285ms: waiting for machine to come up
	I1006 01:41:53.762387  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:41:53.763017  123852 main.go:141] libmachine: (newest-cni-516412) DBG | unable to find current IP address of domain newest-cni-516412 in network mk-newest-cni-516412
	I1006 01:41:53.763077  123852 main.go:141] libmachine: (newest-cni-516412) DBG | I1006 01:41:53.762952  123887 retry.go:31] will retry after 714.23457ms: waiting for machine to come up
	I1006 01:41:54.478335  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:41:54.478981  123852 main.go:141] libmachine: (newest-cni-516412) DBG | unable to find current IP address of domain newest-cni-516412 in network mk-newest-cni-516412
	I1006 01:41:54.479015  123852 main.go:141] libmachine: (newest-cni-516412) DBG | I1006 01:41:54.478909  123887 retry.go:31] will retry after 1.140304254s: waiting for machine to come up
	I1006 01:41:54.738425  120254 system_pods.go:86] 5 kube-system pods found
	I1006 01:41:54.738459  120254 system_pods.go:89] "coredns-5644d7b6d9-4pfcz" [56d0d597-ff05-4887-9112-6509320988bb] Running
	I1006 01:41:54.738469  120254 system_pods.go:89] "kube-controller-manager-old-k8s-version-456697" [4fb9c67b-ceae-4ef3-b74e-7e36ca9b5984] Running
	I1006 01:41:54.738476  120254 system_pods.go:89] "kube-proxy-9h6k5" [4302798d-698e-435d-bdb3-ff2d185bfd97] Running
	I1006 01:41:54.738502  120254 system_pods.go:89] "metrics-server-74d5856cc6-72rfp" [96e0ee39-d033-4749-94de-7dc5895a0ba1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 01:41:54.738515  120254 system_pods.go:89] "storage-provisioner" [ba75e9ab-d81f-4495-98e1-1ac980f95b9b] Running
	I1006 01:41:54.738538  120254 retry.go:31] will retry after 10.402849205s: missing components: etcd, kube-apiserver, kube-scheduler
	I1006 01:41:55.620694  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:41:55.621289  123852 main.go:141] libmachine: (newest-cni-516412) DBG | unable to find current IP address of domain newest-cni-516412 in network mk-newest-cni-516412
	I1006 01:41:55.621333  123852 main.go:141] libmachine: (newest-cni-516412) DBG | I1006 01:41:55.621265  123887 retry.go:31] will retry after 1.132030488s: waiting for machine to come up
	I1006 01:41:56.754593  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:41:56.755169  123852 main.go:141] libmachine: (newest-cni-516412) DBG | unable to find current IP address of domain newest-cni-516412 in network mk-newest-cni-516412
	I1006 01:41:56.755197  123852 main.go:141] libmachine: (newest-cni-516412) DBG | I1006 01:41:56.755118  123887 retry.go:31] will retry after 1.383007461s: waiting for machine to come up
	I1006 01:41:58.139543  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:41:58.140083  123852 main.go:141] libmachine: (newest-cni-516412) DBG | unable to find current IP address of domain newest-cni-516412 in network mk-newest-cni-516412
	I1006 01:41:58.140108  123852 main.go:141] libmachine: (newest-cni-516412) DBG | I1006 01:41:58.140022  123887 retry.go:31] will retry after 1.875578095s: waiting for machine to come up
	I1006 01:42:00.016991  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:00.017509  123852 main.go:141] libmachine: (newest-cni-516412) DBG | unable to find current IP address of domain newest-cni-516412 in network mk-newest-cni-516412
	I1006 01:42:00.017536  123852 main.go:141] libmachine: (newest-cni-516412) DBG | I1006 01:42:00.017463  123887 retry.go:31] will retry after 2.074083666s: waiting for machine to come up
	I1006 01:42:02.094132  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:02.094684  123852 main.go:141] libmachine: (newest-cni-516412) DBG | unable to find current IP address of domain newest-cni-516412 in network mk-newest-cni-516412
	I1006 01:42:02.094718  123852 main.go:141] libmachine: (newest-cni-516412) DBG | I1006 01:42:02.094624  123887 retry.go:31] will retry after 2.961626275s: waiting for machine to come up
	I1006 01:42:05.059486  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:05.059985  123852 main.go:141] libmachine: (newest-cni-516412) DBG | unable to find current IP address of domain newest-cni-516412 in network mk-newest-cni-516412
	I1006 01:42:05.060020  123852 main.go:141] libmachine: (newest-cni-516412) DBG | I1006 01:42:05.059914  123887 retry.go:31] will retry after 3.586008077s: waiting for machine to come up
	I1006 01:42:05.147576  120254 system_pods.go:86] 5 kube-system pods found
	I1006 01:42:05.147612  120254 system_pods.go:89] "coredns-5644d7b6d9-4pfcz" [56d0d597-ff05-4887-9112-6509320988bb] Running
	I1006 01:42:05.147622  120254 system_pods.go:89] "kube-controller-manager-old-k8s-version-456697" [4fb9c67b-ceae-4ef3-b74e-7e36ca9b5984] Running
	I1006 01:42:05.147628  120254 system_pods.go:89] "kube-proxy-9h6k5" [4302798d-698e-435d-bdb3-ff2d185bfd97] Running
	I1006 01:42:05.147638  120254 system_pods.go:89] "metrics-server-74d5856cc6-72rfp" [96e0ee39-d033-4749-94de-7dc5895a0ba1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 01:42:05.147648  120254 system_pods.go:89] "storage-provisioner" [ba75e9ab-d81f-4495-98e1-1ac980f95b9b] Running
	I1006 01:42:05.147672  120254 retry.go:31] will retry after 12.82834461s: missing components: etcd, kube-apiserver, kube-scheduler
	I1006 01:42:08.647518  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:08.648077  123852 main.go:141] libmachine: (newest-cni-516412) Found IP for machine: 192.168.61.107
	I1006 01:42:08.648099  123852 main.go:141] libmachine: (newest-cni-516412) Reserving static IP address...
	I1006 01:42:08.648126  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has current primary IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:08.648540  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "newest-cni-516412", mac: "52:54:00:3b:3a:c7", ip: "192.168.61.107"} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:08.648566  123852 main.go:141] libmachine: (newest-cni-516412) Reserved static IP address: 192.168.61.107
	I1006 01:42:08.648576  123852 main.go:141] libmachine: (newest-cni-516412) DBG | skip adding static IP to network mk-newest-cni-516412 - found existing host DHCP lease matching {name: "newest-cni-516412", mac: "52:54:00:3b:3a:c7", ip: "192.168.61.107"}
	I1006 01:42:08.648588  123852 main.go:141] libmachine: (newest-cni-516412) DBG | Getting to WaitForSSH function...
	I1006 01:42:08.648604  123852 main.go:141] libmachine: (newest-cni-516412) Waiting for SSH to be available...
	I1006 01:42:08.650949  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:08.651298  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:08.651328  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:08.651484  123852 main.go:141] libmachine: (newest-cni-516412) DBG | Using SSH client type: external
	I1006 01:42:08.651512  123852 main.go:141] libmachine: (newest-cni-516412) DBG | Using SSH private key: /home/jenkins/minikube-integration/17314-68418/.minikube/machines/newest-cni-516412/id_rsa (-rw-------)
	I1006 01:42:08.651541  123852 main.go:141] libmachine: (newest-cni-516412) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17314-68418/.minikube/machines/newest-cni-516412/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1006 01:42:08.651558  123852 main.go:141] libmachine: (newest-cni-516412) DBG | About to run SSH command:
	I1006 01:42:08.651578  123852 main.go:141] libmachine: (newest-cni-516412) DBG | exit 0
	I1006 01:42:08.746233  123852 main.go:141] libmachine: (newest-cni-516412) DBG | SSH cmd err, output: <nil>: 
	I1006 01:42:08.746661  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetConfigRaw
	I1006 01:42:08.747397  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetIP
	I1006 01:42:08.749960  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:08.750356  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:08.750388  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:08.750645  123852 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/newest-cni-516412/config.json ...
	I1006 01:42:08.750830  123852 machine.go:88] provisioning docker machine ...
	I1006 01:42:08.750849  123852 main.go:141] libmachine: (newest-cni-516412) Calling .DriverName
	I1006 01:42:08.751058  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetMachineName
	I1006 01:42:08.751272  123852 buildroot.go:166] provisioning hostname "newest-cni-516412"
	I1006 01:42:08.751294  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetMachineName
	I1006 01:42:08.751476  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHHostname
	I1006 01:42:08.753761  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:08.754100  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:08.754136  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:08.754215  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHPort
	I1006 01:42:08.754382  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:08.754609  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:08.754756  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHUsername
	I1006 01:42:08.754933  123852 main.go:141] libmachine: Using SSH client type: native
	I1006 01:42:08.755273  123852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I1006 01:42:08.755292  123852 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-516412 && echo "newest-cni-516412" | sudo tee /etc/hostname
	I1006 01:42:08.895019  123852 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-516412
	
	I1006 01:42:08.895060  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHHostname
	I1006 01:42:08.898016  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:08.898376  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:08.898423  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:08.898614  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHPort
	I1006 01:42:08.898820  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:08.899001  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:08.899162  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHUsername
	I1006 01:42:08.899342  123852 main.go:141] libmachine: Using SSH client type: native
	I1006 01:42:08.899652  123852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I1006 01:42:08.899672  123852 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-516412' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-516412/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-516412' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 01:42:09.034896  123852 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 01:42:09.034929  123852 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17314-68418/.minikube CaCertPath:/home/jenkins/minikube-integration/17314-68418/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17314-68418/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17314-68418/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17314-68418/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17314-68418/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17314-68418/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17314-68418/.minikube}
	I1006 01:42:09.034959  123852 buildroot.go:174] setting up certificates
	I1006 01:42:09.034980  123852 provision.go:83] configureAuth start
	I1006 01:42:09.035000  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetMachineName
	I1006 01:42:09.035299  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetIP
	I1006 01:42:09.037964  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:09.038344  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:09.038387  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:09.038544  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHHostname
	I1006 01:42:09.040879  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:09.041287  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:09.041331  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:09.041483  123852 provision.go:138] copyHostCerts
	I1006 01:42:09.041542  123852 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-68418/.minikube/ca.pem, removing ...
	I1006 01:42:09.041559  123852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-68418/.minikube/ca.pem
	I1006 01:42:09.041619  123852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-68418/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17314-68418/.minikube/ca.pem (1082 bytes)
	I1006 01:42:09.041718  123852 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-68418/.minikube/cert.pem, removing ...
	I1006 01:42:09.041727  123852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-68418/.minikube/cert.pem
	I1006 01:42:09.041753  123852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-68418/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17314-68418/.minikube/cert.pem (1123 bytes)
	I1006 01:42:09.041813  123852 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-68418/.minikube/key.pem, removing ...
	I1006 01:42:09.041820  123852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-68418/.minikube/key.pem
	I1006 01:42:09.041840  123852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-68418/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17314-68418/.minikube/key.pem (1679 bytes)
	I1006 01:42:09.041901  123852 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17314-68418/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17314-68418/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17314-68418/.minikube/certs/ca-key.pem org=jenkins.newest-cni-516412 san=[192.168.61.107 192.168.61.107 localhost 127.0.0.1 minikube newest-cni-516412]
	I1006 01:42:09.306154  123852 provision.go:172] copyRemoteCerts
	I1006 01:42:09.306247  123852 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 01:42:09.306284  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHHostname
	I1006 01:42:09.309253  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:09.309600  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:09.309639  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:09.309785  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHPort
	I1006 01:42:09.310050  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:09.310230  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHUsername
	I1006 01:42:09.310374  123852 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-68418/.minikube/machines/newest-cni-516412/id_rsa Username:docker}
	I1006 01:42:09.404006  123852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-68418/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 01:42:09.427205  123852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-68418/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1006 01:42:09.449764  123852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-68418/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 01:42:09.471334  123852 provision.go:86] duration metric: configureAuth took 436.323643ms
	I1006 01:42:09.471364  123852 buildroot.go:189] setting minikube options for container-runtime
	I1006 01:42:09.471554  123852 config.go:182] Loaded profile config "newest-cni-516412": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1006 01:42:09.471609  123852 main.go:141] libmachine: (newest-cni-516412) Calling .DriverName
	I1006 01:42:09.471951  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHHostname
	I1006 01:42:09.474560  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:09.474903  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:09.474951  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:09.475044  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHPort
	I1006 01:42:09.475239  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:09.475421  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:09.475695  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHUsername
	I1006 01:42:09.475884  123852 main.go:141] libmachine: Using SSH client type: native
	I1006 01:42:09.476231  123852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I1006 01:42:09.476249  123852 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1006 01:42:09.603969  123852 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1006 01:42:09.603995  123852 buildroot.go:70] root file system type: tmpfs
	I1006 01:42:09.604126  123852 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1006 01:42:09.604158  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHHostname
	I1006 01:42:09.607133  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:09.607526  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:09.607561  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:09.607744  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHPort
	I1006 01:42:09.607959  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:09.608150  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:09.608329  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHUsername
	I1006 01:42:09.608523  123852 main.go:141] libmachine: Using SSH client type: native
	I1006 01:42:09.608967  123852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I1006 01:42:09.609058  123852 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1006 01:42:09.746567  123852 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1006 01:42:09.746608  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHHostname
	I1006 01:42:09.749522  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:09.749888  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:09.749927  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:09.750132  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHPort
	I1006 01:42:09.750353  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:09.750559  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:09.750718  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHUsername
	I1006 01:42:09.750909  123852 main.go:141] libmachine: Using SSH client type: native
	I1006 01:42:09.751222  123852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I1006 01:42:09.751242  123852 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1006 01:42:10.620974  123852 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1006 01:42:10.621005  123852 machine.go:91] provisioned docker machine in 1.870160304s
	I1006 01:42:10.621017  123852 start.go:300] post-start starting for "newest-cni-516412" (driver="kvm2")
	I1006 01:42:10.621027  123852 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 01:42:10.621043  123852 main.go:141] libmachine: (newest-cni-516412) Calling .DriverName
	I1006 01:42:10.621497  123852 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 01:42:10.621538  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHHostname
	I1006 01:42:10.624393  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:10.624800  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:10.624832  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:10.624951  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHPort
	I1006 01:42:10.625152  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:10.625347  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHUsername
	I1006 01:42:10.625510  123852 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-68418/.minikube/machines/newest-cni-516412/id_rsa Username:docker}
	I1006 01:42:10.720263  123852 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 01:42:10.724199  123852 info.go:137] Remote host: Buildroot 2021.02.12
	I1006 01:42:10.724231  123852 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-68418/.minikube/addons for local assets ...
	I1006 01:42:10.724316  123852 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-68418/.minikube/files for local assets ...
	I1006 01:42:10.724426  123852 filesync.go:149] local asset: /home/jenkins/minikube-integration/17314-68418/.minikube/files/etc/ssl/certs/755962.pem -> 755962.pem in /etc/ssl/certs
	I1006 01:42:10.724544  123852 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 01:42:10.733282  123852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-68418/.minikube/files/etc/ssl/certs/755962.pem --> /etc/ssl/certs/755962.pem (1708 bytes)
	I1006 01:42:10.756004  123852 start.go:303] post-start completed in 134.971327ms
	I1006 01:42:10.756037  123852 fix.go:56] fixHost completed within 20.381420328s
	I1006 01:42:10.756062  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHHostname
	I1006 01:42:10.758534  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:10.758876  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:10.758903  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:10.759051  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHPort
	I1006 01:42:10.759274  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:10.759479  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:10.759617  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHUsername
	I1006 01:42:10.759768  123852 main.go:141] libmachine: Using SSH client type: native
	I1006 01:42:10.760115  123852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I1006 01:42:10.760129  123852 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1006 01:42:10.887063  123852 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696556530.838171548
	
	I1006 01:42:10.887089  123852 fix.go:206] guest clock: 1696556530.838171548
	I1006 01:42:10.887097  123852 fix.go:219] Guest: 2023-10-06 01:42:10.838171548 +0000 UTC Remote: 2023-10-06 01:42:10.756041405 +0000 UTC m=+20.540611014 (delta=82.130143ms)
	I1006 01:42:10.887132  123852 fix.go:190] guest clock delta is within tolerance: 82.130143ms
	I1006 01:42:10.887137  123852 start.go:83] releasing machines lock for "newest-cni-516412", held for 20.51254097s
	I1006 01:42:10.887158  123852 main.go:141] libmachine: (newest-cni-516412) Calling .DriverName
	I1006 01:42:10.887389  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetIP
	I1006 01:42:10.890140  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:10.890549  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:10.890589  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:10.890775  123852 main.go:141] libmachine: (newest-cni-516412) Calling .DriverName
	I1006 01:42:10.891304  123852 main.go:141] libmachine: (newest-cni-516412) Calling .DriverName
	I1006 01:42:10.891522  123852 main.go:141] libmachine: (newest-cni-516412) Calling .DriverName
	I1006 01:42:10.891629  123852 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 01:42:10.891680  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHHostname
	I1006 01:42:10.891829  123852 ssh_runner.go:195] Run: cat /version.json
	I1006 01:42:10.891854  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHHostname
	I1006 01:42:10.894105  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:10.894419  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:10.894447  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:10.894559  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:10.894605  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHPort
	I1006 01:42:10.894796  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:10.894976  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHUsername
	I1006 01:42:10.894979  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:10.895046  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:10.895164  123852 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-68418/.minikube/machines/newest-cni-516412/id_rsa Username:docker}
	I1006 01:42:10.895212  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHPort
	I1006 01:42:10.895360  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:10.895499  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHUsername
	I1006 01:42:10.895644  123852 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-68418/.minikube/machines/newest-cni-516412/id_rsa Username:docker}
	I1006 01:42:10.983681  123852 ssh_runner.go:195] Run: systemctl --version
	I1006 01:42:11.007701  123852 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 01:42:11.013849  123852 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 01:42:11.013951  123852 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 01:42:11.028984  123852 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1006 01:42:11.029034  123852 start.go:472] detecting cgroup driver to use...
	I1006 01:42:11.029161  123852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 01:42:11.049642  123852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1006 01:42:11.059682  123852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1006 01:42:11.069421  123852 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1006 01:42:11.069496  123852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1006 01:42:11.079164  123852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1006 01:42:11.088973  123852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1006 01:42:11.098619  123852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1006 01:42:11.109010  123852 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 01:42:11.120771  123852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1006 01:42:11.130793  123852 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 01:42:11.139829  123852 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 01:42:11.148320  123852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 01:42:11.252907  123852 ssh_runner.go:195] Run: sudo systemctl restart containerd
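The sed sequence above rewrites `/etc/containerd/config.toml` in place to use the cgroupfs driver, flipping `SystemdCgroup` to `false` while preserving indentation via the captured group. A standalone sketch of that rewrite against a throwaway config file (the TOML table name is illustrative):

```shell
# Sketch: switch containerd's runc options to cgroupfs by setting
# SystemdCgroup = false, mirroring the sed rewrite in the log.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# \1 re-emits the leading whitespace so the TOML indentation survives.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```

Note that `sed -i -r` is GNU sed syntax, which matches the Buildroot guest this run provisions.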
	I1006 01:42:11.269580  123852 start.go:472] detecting cgroup driver to use...
	I1006 01:42:11.269699  123852 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1006 01:42:11.283659  123852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 01:42:11.296523  123852 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 01:42:11.315285  123852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 01:42:11.328195  123852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1006 01:42:11.340725  123852 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1006 01:42:11.376072  123852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1006 01:42:11.388375  123852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 01:42:11.405632  123852 ssh_runner.go:195] Run: which cri-dockerd
	I1006 01:42:11.409533  123852 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1006 01:42:11.417630  123852 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1006 01:42:11.433025  123852 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1006 01:42:11.535455  123852 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1006 01:42:11.656423  123852 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1006 01:42:11.656582  123852 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1006 01:42:11.673674  123852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 01:42:11.786057  123852 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1006 01:42:13.206860  123852 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.420759171s)
	I1006 01:42:13.206948  123852 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1006 01:42:13.309971  123852 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1006 01:42:13.423294  123852 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1006 01:42:13.534473  123852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 01:42:13.644655  123852 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1006 01:42:13.662038  123852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 01:42:13.773691  123852 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1006 01:42:13.847541  123852 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1006 01:42:13.847647  123852 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1006 01:42:13.852917  123852 start.go:540] Will wait 60s for crictl version
	I1006 01:42:13.853011  123852 ssh_runner.go:195] Run: which crictl
	I1006 01:42:13.858075  123852 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1006 01:42:13.917616  123852 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1006 01:42:13.917707  123852 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1006 01:42:13.942652  123852 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1006 01:42:13.970139  123852 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1006 01:42:13.970223  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetIP
	I1006 01:42:13.973075  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:13.973475  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:13.973517  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:13.973665  123852 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1006 01:42:13.977466  123852 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
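The `/etc/hosts` update above is a remove-then-append refresh: strip any stale line for the name, append the current mapping, and copy the result back over the original. A sketch of the same pattern against a temp file so it runs unprivileged (addresses are illustrative):

```shell
# Sketch of the hosts-entry refresh pattern from the log: drop the old
# tab-separated entry for the name, then append the current mapping.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.61.99\thost.minikube.internal\n' > "$hosts"
tab=$(printf '\t')
{ grep -v "${tab}host.minikube.internal\$" "$hosts"; \
  printf '192.168.61.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Writing to a scratch file and then replacing the original avoids truncating the file that `grep` is still reading, which is why the log's one-liner goes through `/tmp/h.$$` before the `sudo cp`.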
	I1006 01:42:13.991681  123852 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1006 01:42:13.993297  123852 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1006 01:42:13.993397  123852 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1006 01:42:14.012674  123852 docker.go:689] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1006 01:42:14.012710  123852 docker.go:619] Images already preloaded, skipping extraction
	I1006 01:42:14.012795  123852 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1006 01:42:14.031226  123852 docker.go:689] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1006 01:42:14.031255  123852 cache_images.go:84] Images are preloaded, skipping loading
	I1006 01:42:14.031336  123852 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1006 01:42:14.056558  123852 cni.go:84] Creating CNI manager for ""
	I1006 01:42:14.056583  123852 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 01:42:14.056604  123852 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1006 01:42:14.056621  123852 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.107 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-516412 NodeName:newest-cni-516412 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:
map[] NodeIP:192.168.61.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 01:42:14.056785  123852 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-516412"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 01:42:14.056887  123852 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-516412 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:newest-cni-516412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1006 01:42:14.056966  123852 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1006 01:42:14.065996  123852 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 01:42:14.066070  123852 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 01:42:14.073786  123852 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (417 bytes)
	I1006 01:42:14.088816  123852 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 01:42:14.104281  123852 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1006 01:42:14.120804  123852 ssh_runner.go:195] Run: grep 192.168.61.107	control-plane.minikube.internal$ /etc/hosts
	I1006 01:42:14.124435  123852 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 01:42:14.136468  123852 certs.go:56] Setting up /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/newest-cni-516412 for IP: 192.168.61.107
	I1006 01:42:14.136507  123852 certs.go:190] acquiring lock for shared ca certs: {Name:mk66b56b2a9e7637d1c9978f837006d0ac1bdbc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 01:42:14.136697  123852 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17314-68418/.minikube/ca.key
	I1006 01:42:14.136780  123852 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17314-68418/.minikube/proxy-client-ca.key
	I1006 01:42:14.136887  123852 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/newest-cni-516412/client.key
	I1006 01:42:14.136975  123852 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/newest-cni-516412/apiserver.key.b339b16d
	I1006 01:42:14.137064  123852 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/newest-cni-516412/proxy-client.key
	I1006 01:42:14.137236  123852 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-68418/.minikube/certs/home/jenkins/minikube-integration/17314-68418/.minikube/certs/75596.pem (1338 bytes)
	W1006 01:42:14.137279  123852 certs.go:433] ignoring /home/jenkins/minikube-integration/17314-68418/.minikube/certs/home/jenkins/minikube-integration/17314-68418/.minikube/certs/75596_empty.pem, impossibly tiny 0 bytes
	I1006 01:42:14.137297  123852 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-68418/.minikube/certs/home/jenkins/minikube-integration/17314-68418/.minikube/certs/ca-key.pem (1679 bytes)
	I1006 01:42:14.137335  123852 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-68418/.minikube/certs/home/jenkins/minikube-integration/17314-68418/.minikube/certs/ca.pem (1082 bytes)
	I1006 01:42:14.137379  123852 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-68418/.minikube/certs/home/jenkins/minikube-integration/17314-68418/.minikube/certs/cert.pem (1123 bytes)
	I1006 01:42:14.137428  123852 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-68418/.minikube/certs/home/jenkins/minikube-integration/17314-68418/.minikube/certs/key.pem (1679 bytes)
	I1006 01:42:14.137497  123852 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-68418/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17314-68418/.minikube/files/etc/ssl/certs/755962.pem (1708 bytes)
	I1006 01:42:14.138196  123852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/newest-cni-516412/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1006 01:42:14.161438  123852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/newest-cni-516412/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 01:42:14.184386  123852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/newest-cni-516412/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 01:42:14.207007  123852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/newest-cni-516412/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 01:42:14.230402  123852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-68418/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 01:42:14.252544  123852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-68418/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1006 01:42:14.275183  123852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-68418/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 01:42:14.297954  123852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-68418/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 01:42:14.320266  123852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-68418/.minikube/certs/75596.pem --> /usr/share/ca-certificates/75596.pem (1338 bytes)
	I1006 01:42:14.343007  123852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-68418/.minikube/files/etc/ssl/certs/755962.pem --> /usr/share/ca-certificates/755962.pem (1708 bytes)
	I1006 01:42:14.365377  123852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-68418/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 01:42:14.387754  123852 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 01:42:14.403492  123852 ssh_runner.go:195] Run: openssl version
	I1006 01:42:14.408819  123852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75596.pem && ln -fs /usr/share/ca-certificates/75596.pem /etc/ssl/certs/75596.pem"
	I1006 01:42:14.418193  123852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75596.pem
	I1006 01:42:14.422607  123852 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  6 00:49 /usr/share/ca-certificates/75596.pem
	I1006 01:42:14.422674  123852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75596.pem
	I1006 01:42:14.428315  123852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/75596.pem /etc/ssl/certs/51391683.0"
	I1006 01:42:14.437594  123852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/755962.pem && ln -fs /usr/share/ca-certificates/755962.pem /etc/ssl/certs/755962.pem"
	I1006 01:42:14.447367  123852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/755962.pem
	I1006 01:42:14.451784  123852 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  6 00:49 /usr/share/ca-certificates/755962.pem
	I1006 01:42:14.451867  123852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/755962.pem
	I1006 01:42:14.457398  123852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/755962.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 01:42:14.466728  123852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 01:42:14.475835  123852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 01:42:14.480774  123852 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  6 00:45 /usr/share/ca-certificates/minikubeCA.pem
	I1006 01:42:14.480858  123852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 01:42:14.486267  123852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
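The `openssl x509 -hash` / `ln -fs .../<hash>.0` steps above implement OpenSSL's hashed-directory CA lookup: clients find a CA in `/etc/ssl/certs` by a symlink named after the certificate's subject hash. A sketch of that sequence with a throwaway self-signed cert (subject and directory are illustrative):

```shell
# Sketch of the CA subject-hash symlink step from the log. OpenSSL
# resolves CAs in a hashed directory via <subject-hash>.0 links.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=minikubeCA' \
  -keyout "$dir/ca.key" -out "$dir/minikubeCA.pem" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/minikubeCA.pem")
ln -fs "$dir/minikubeCA.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```

This is why the log's `ln` commands use fixed names like `b5213941.0`: that string is the subject hash of the minikubeCA certificate, precomputed on an earlier run.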
	I1006 01:42:14.497244  123852 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1006 01:42:14.502003  123852 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 01:42:14.508088  123852 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 01:42:14.514105  123852 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 01:42:14.519906  123852 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 01:42:14.525606  123852 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 01:42:14.531561  123852 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
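Each `-checkend 86400` run above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means yes, 1 means it will have expired, which is how minikube decides whether control-plane certs need regeneration. A self-contained sketch with a two-day throwaway cert:

```shell
set -eu
dir=$(mktemp -d)
# Throwaway cert valid for 2 days, for illustration only
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout "$dir/k.pem" -out "$dir/c.pem" -days 2 2>/dev/null
# Exit 0: cert still valid 24h from now
if openssl x509 -noout -in "$dir/c.pem" -checkend 86400 >/dev/null; then
  echo "cert valid for at least another day"
fi
# Exit 1: cert will NOT survive another 3 days (259200s)
openssl x509 -noout -in "$dir/c.pem" -checkend 259200 >/dev/null \
  || echo "cert expires within 3 days"
```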
	I1006 01:42:14.537569  123852 kubeadm.go:404] StartCluster: {Name:newest-cni-516412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-516412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 01:42:14.537745  123852 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1006 01:42:14.556456  123852 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 01:42:14.565206  123852 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1006 01:42:14.565262  123852 kubeadm.go:636] restartCluster start
	I1006 01:42:14.565312  123852 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 01:42:14.573409  123852 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:14.574111  123852 kubeconfig.go:135] verify returned: extract IP: "newest-cni-516412" does not appear in /home/jenkins/minikube-integration/17314-68418/kubeconfig
	I1006 01:42:14.574383  123852 kubeconfig.go:146] "newest-cni-516412" context is missing from /home/jenkins/minikube-integration/17314-68418/kubeconfig - will repair!
	I1006 01:42:14.574971  123852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-68418/kubeconfig: {Name:mk648f60cfb65e68b187383d1df4c36007f003a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 01:42:14.576478  123852 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 01:42:14.584944  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:14.585008  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:14.596053  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:14.596075  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:14.596132  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:14.606979  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:15.107798  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:15.107908  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:15.119807  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:15.607811  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:15.607923  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:15.618921  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:16.107427  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:16.107550  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:16.123439  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:16.607706  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:16.607807  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:16.619278  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:17.107877  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:17.107989  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:17.122189  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:17.607826  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:17.607950  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:17.618662  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:18.107351  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:18.107427  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:18.118779  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:18.607997  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:18.608120  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:18.619678  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:19.107193  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:19.107307  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:19.119507  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:19.608107  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:19.608227  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:19.619481  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:20.107644  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:20.107755  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:20.119163  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:17.981987  120254 system_pods.go:86] 8 kube-system pods found
	I1006 01:42:17.982016  120254 system_pods.go:89] "coredns-5644d7b6d9-4pfcz" [56d0d597-ff05-4887-9112-6509320988bb] Running
	I1006 01:42:17.982021  120254 system_pods.go:89] "etcd-old-k8s-version-456697" [276fcd61-2966-4779-ae1c-8141f6d08b67] Pending
	I1006 01:42:17.982025  120254 system_pods.go:89] "kube-apiserver-old-k8s-version-456697" [b91fbdcd-ba58-4289-bdf7-44904e1b25db] Running
	I1006 01:42:17.982029  120254 system_pods.go:89] "kube-controller-manager-old-k8s-version-456697" [4fb9c67b-ceae-4ef3-b74e-7e36ca9b5984] Running
	I1006 01:42:17.982033  120254 system_pods.go:89] "kube-proxy-9h6k5" [4302798d-698e-435d-bdb3-ff2d185bfd97] Running
	I1006 01:42:17.982037  120254 system_pods.go:89] "kube-scheduler-old-k8s-version-456697" [9a7012e2-cd19-440a-90f7-214122982fb6] Running
	I1006 01:42:17.982044  120254 system_pods.go:89] "metrics-server-74d5856cc6-72rfp" [96e0ee39-d033-4749-94de-7dc5895a0ba1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 01:42:17.982050  120254 system_pods.go:89] "storage-provisioner" [ba75e9ab-d81f-4495-98e1-1ac980f95b9b] Running
	I1006 01:42:17.982067  120254 retry.go:31] will retry after 16.05401481s: missing components: etcd
	I1006 01:42:20.607702  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:20.607813  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:20.619003  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:21.107311  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:21.107406  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:21.124700  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:21.607194  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:21.607314  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:21.618093  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:22.107756  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:22.107847  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:22.119660  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:22.607195  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:22.607289  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:22.618648  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:23.107211  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:23.107301  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:23.118462  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:23.607699  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:23.607798  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:23.618885  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 01:42:24.108043  123852 api_server.go:166] Checking apiserver status ...
	I1006 01:42:24.108153  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 01:42:24.119282  123852 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
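The repeated "Checking apiserver status" / `pgrep -xnf` pairs above are a poll-until-deadline loop: run `pgrep` against the full command line (`-f`), require an exact match (`-x`), take the newest match (`-n`), and back off briefly between attempts until a deadline (here, the 10 s `restartCluster` verify window) expires. A hedged sketch of the same pattern, using a background `sleep` as a stand-in process since no kube-apiserver is available:

```shell
set -u
sleep 5 &                        # stand-in for the awaited kube-apiserver
deadline=$(( $(date +%s) + 10 )) # give up after 10 seconds, like the log
until pid=$(pgrep -xnf "sleep 5"); do
  if [ "$(date +%s)" -ge "$deadline" ]; then
    echo "timed out waiting for process" >&2
    exit 1
  fi
  sleep 1                        # back off between checks
done
echo "found pid $pid"
```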
	I1006 01:42:24.585064  123852 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1006 01:42:24.585104  123852 kubeadm.go:1128] stopping kube-system containers ...
	I1006 01:42:24.585180  123852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1006 01:42:24.605290  123852 docker.go:464] Stopping containers: [8bd395f729e5 2eefb18aacab da147598b7cf 8066541b1e4e 83c8bb538d2b 4962fb17b6a4 ff04ac648857 a2a5269f12d4 518f34e4f675 a6f1c9db0f18 d99468b41eb3 c170e8a6a8ca 4f4b70098684 3662b4f8b5b0 c8351425a131]
	I1006 01:42:24.605367  123852 ssh_runner.go:195] Run: docker stop 8bd395f729e5 2eefb18aacab da147598b7cf 8066541b1e4e 83c8bb538d2b 4962fb17b6a4 ff04ac648857 a2a5269f12d4 518f34e4f675 a6f1c9db0f18 d99468b41eb3 c170e8a6a8ca 4f4b70098684 3662b4f8b5b0 c8351425a131
	I1006 01:42:24.627253  123852 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1006 01:42:24.643054  123852 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 01:42:24.651056  123852 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 01:42:24.651121  123852 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 01:42:24.659073  123852 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1006 01:42:24.659103  123852 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 01:42:24.786709  123852 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 01:42:25.798587  123852 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.011832686s)
	I1006 01:42:25.798645  123852 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1006 01:42:25.982160  123852 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 01:42:26.083354  123852 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1006 01:42:26.151488  123852 api_server.go:52] waiting for apiserver process to appear ...
	I1006 01:42:26.151584  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 01:42:26.165325  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 01:42:26.681493  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 01:42:27.181886  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 01:42:27.681101  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 01:42:28.181489  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 01:42:28.680885  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 01:42:28.697476  123852 api_server.go:72] duration metric: took 2.54598687s to wait for apiserver process to appear ...
	I1006 01:42:28.697507  123852 api_server.go:88] waiting for apiserver healthz status ...
	I1006 01:42:28.697528  123852 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I1006 01:42:28.698038  123852 api_server.go:269] stopped: https://192.168.61.107:8443/healthz: Get "https://192.168.61.107:8443/healthz": dial tcp 192.168.61.107:8443: connect: connection refused
	I1006 01:42:28.698072  123852 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I1006 01:42:28.698591  123852 api_server.go:269] stopped: https://192.168.61.107:8443/healthz: Get "https://192.168.61.107:8443/healthz": dial tcp 192.168.61.107:8443: connect: connection refused
	I1006 01:42:29.199338  123852 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I1006 01:42:31.900483  123852 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1006 01:42:31.900520  123852 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1006 01:42:31.900539  123852 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I1006 01:42:31.951429  123852 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1006 01:42:31.951488  123852 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1006 01:42:32.199689  123852 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I1006 01:42:32.205304  123852 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 01:42:32.205342  123852 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1006 01:42:32.699691  123852 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I1006 01:42:32.705364  123852 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 01:42:32.705402  123852 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1006 01:42:33.199738  123852 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I1006 01:42:33.205136  123852 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 01:42:33.205172  123852 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1006 01:42:33.698732  123852 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I1006 01:42:33.704547  123852 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I1006 01:42:33.713640  123852 api_server.go:141] control plane version: v1.28.2
	I1006 01:42:33.713671  123852 api_server.go:131] duration metric: took 5.016156634s to wait for apiserver health ...
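The healthz wait that just completed follows a simple contract: keep polling the endpoint, treating connection refused, 403 (anonymous not yet authorized while RBAC bootstraps), and 500 (post-start hooks still failing) as "not ready", and stop on the first 200. A sketch of that loop against a local Python HTTP server rather than a real apiserver (the port and URL here are stand-ins, not minikube's):

```shell
set -u
# Stand-in server; a real apiserver would be HTTPS on 8443
python3 -m http.server 8099 >/dev/null 2>&1 &
srv=$!
trap 'kill $srv 2>/dev/null' EXIT
code=000
for i in $(seq 1 20); do
  # curl -w prints only the HTTP status; 000 means connection refused
  code=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:8099/ || echo 000)
  [ "$code" = 200 ] && break     # healthy: stop polling
  sleep 1                        # 403/500/refused: retry, as the log does
done
echo "healthz status: $code"
```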
	I1006 01:42:33.713684  123852 cni.go:84] Creating CNI manager for ""
	I1006 01:42:33.713701  123852 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 01:42:33.715455  123852 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1006 01:42:33.717420  123852 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1006 01:42:33.732296  123852 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
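The 457-byte `1-k8s.conflist` written above is minikube's bridge CNI configuration. A representative (assumed, not byte-exact) conflist of that shape, written and validated locally; the subnet and plugin values are illustrative:

```shell
set -eu
f=$(mktemp --suffix=.conflist)
# Representative bridge + portmap CNI config; values are assumptions,
# not the exact contents minikube generates
cat <<'EOF' > "$f"
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
# Sanity-check that the file is well-formed JSON
python3 -m json.tool "$f" >/dev/null && echo "valid JSON"
```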
	I1006 01:42:33.765007  123852 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 01:42:33.783985  123852 system_pods.go:59] 8 kube-system pods found
	I1006 01:42:33.784036  123852 system_pods.go:61] "coredns-5dd5756b68-chtwr" [47db9eff-1ea8-4208-8e1d-a5a379a226ea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 01:42:33.784056  123852 system_pods.go:61] "etcd-newest-cni-516412" [37d0b87a-f92d-46ef-8b52-0978b969f77f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 01:42:33.784068  123852 system_pods.go:61] "kube-apiserver-newest-cni-516412" [607fba0a-3396-44ea-8ad1-f5aaa1bb72d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 01:42:33.784081  123852 system_pods.go:61] "kube-controller-manager-newest-cni-516412" [9ec8626c-0c4b-4710-9f1d-c60e4e21c2ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 01:42:33.784089  123852 system_pods.go:61] "kube-proxy-68ldw" [8c0ddd2c-d561-4e3c-8fd0-f794e546389e] Running
	I1006 01:42:33.784127  123852 system_pods.go:61] "kube-scheduler-newest-cni-516412" [46f538ba-a327-4510-9879-c88916158d3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 01:42:33.784145  123852 system_pods.go:61] "metrics-server-57f55c9bc5-pdjtx" [1e0410b4-4b16-44c4-a1a1-9f7c950257eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 01:42:33.784155  123852 system_pods.go:61] "storage-provisioner" [219ca167-3b81-4c11-b9d0-1636bded191c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 01:42:33.784167  123852 system_pods.go:74] duration metric: took 19.137111ms to wait for pod list to return data ...
	I1006 01:42:33.784181  123852 node_conditions.go:102] verifying NodePressure condition ...
	I1006 01:42:33.789236  123852 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1006 01:42:33.789278  123852 node_conditions.go:123] node cpu capacity is 2
	I1006 01:42:33.789294  123852 node_conditions.go:105] duration metric: took 5.103628ms to run NodePressure ...
	I1006 01:42:33.789319  123852 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 01:42:34.047585  123852 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 01:42:34.059262  123852 ops.go:34] apiserver oom_adj: -16
	I1006 01:42:34.059300  123852 kubeadm.go:640] restartCluster took 19.494030763s
	I1006 01:42:34.059309  123852 kubeadm.go:406] StartCluster complete in 19.521752294s
	I1006 01:42:34.059340  123852 settings.go:142] acquiring lock: {Name:mk3b3087875cfd1c20f0795375bd034ac69cc92a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 01:42:34.059436  123852 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17314-68418/kubeconfig
	I1006 01:42:34.060435  123852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-68418/kubeconfig: {Name:mk648f60cfb65e68b187383d1df4c36007f003a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 01:42:34.060713  123852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 01:42:34.060816  123852 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1006 01:42:34.060945  123852 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-516412"
	I1006 01:42:34.060962  123852 addons.go:69] Setting dashboard=true in profile "newest-cni-516412"
	I1006 01:42:34.060974  123852 config.go:182] Loaded profile config "newest-cni-516412": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1006 01:42:34.060981  123852 addons.go:231] Setting addon dashboard=true in "newest-cni-516412"
	I1006 01:42:34.060985  123852 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-516412"
	W1006 01:42:34.060993  123852 addons.go:240] addon storage-provisioner should already be in state true
	W1006 01:42:34.061004  123852 addons.go:240] addon dashboard should already be in state true
	I1006 01:42:34.061055  123852 host.go:66] Checking if "newest-cni-516412" exists ...
	I1006 01:42:34.061058  123852 host.go:66] Checking if "newest-cni-516412" exists ...
	I1006 01:42:34.061071  123852 cache.go:107] acquiring lock: {Name:mk4491440db9d5e8f0696864833789b1ab5d6c18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 01:42:34.061143  123852 cache.go:115] /home/jenkins/minikube-integration/17314-68418/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1006 01:42:34.061153  123852 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17314-68418/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 95.441µs
	I1006 01:42:34.061164  123852 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17314-68418/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1006 01:42:34.061172  123852 cache.go:87] Successfully saved all images to host disk.
	I1006 01:42:34.061264  123852 addons.go:69] Setting metrics-server=true in profile "newest-cni-516412"
	I1006 01:42:34.061304  123852 addons.go:231] Setting addon metrics-server=true in "newest-cni-516412"
	W1006 01:42:34.061316  123852 addons.go:240] addon metrics-server should already be in state true
	I1006 01:42:34.061370  123852 config.go:182] Loaded profile config "newest-cni-516412": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1006 01:42:34.061409  123852 host.go:66] Checking if "newest-cni-516412" exists ...
	I1006 01:42:34.061476  123852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:42:34.061499  123852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:42:34.061534  123852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:42:34.061545  123852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:42:34.061756  123852 addons.go:69] Setting default-storageclass=true in profile "newest-cni-516412"
	I1006 01:42:34.061793  123852 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-516412"
	I1006 01:42:34.061868  123852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:42:34.061918  123852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:42:34.061760  123852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:42:34.062165  123852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:42:34.062197  123852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:42:34.062226  123852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:42:34.072818  123852 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-516412" context rescaled to 1 replicas
	I1006 01:42:34.072870  123852 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1006 01:42:34.074714  123852 out.go:177] * Verifying Kubernetes components...
	I1006 01:42:34.076235  123852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 01:42:34.081183  123852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35145
	I1006 01:42:34.081399  123852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
	I1006 01:42:34.081679  123852 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:42:34.081820  123852 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:42:34.082180  123852 main.go:141] libmachine: Using API Version  1
	I1006 01:42:34.082197  123852 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:42:34.082271  123852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I1006 01:42:34.082344  123852 main.go:141] libmachine: Using API Version  1
	I1006 01:42:34.082373  123852 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:42:34.082801  123852 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:42:34.082892  123852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44155
	I1006 01:42:34.083070  123852 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:42:34.083312  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetState
	I1006 01:42:34.083472  123852 main.go:141] libmachine: Using API Version  1
	I1006 01:42:34.083484  123852 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:42:34.083546  123852 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:42:34.083630  123852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35663
	I1006 01:42:34.083775  123852 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:42:34.083929  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetState
	I1006 01:42:34.083950  123852 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:42:34.084224  123852 main.go:141] libmachine: Using API Version  1
	I1006 01:42:34.084242  123852 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:42:34.084312  123852 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:42:34.084666  123852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:42:34.084712  123852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:42:34.084729  123852 main.go:141] libmachine: Using API Version  1
	I1006 01:42:34.084746  123852 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:42:34.084817  123852 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:42:34.085230  123852 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:42:34.085782  123852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:42:34.085818  123852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:42:34.087928  123852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:42:34.087973  123852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:42:34.088735  123852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:42:34.088765  123852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:42:34.092076  123852 addons.go:231] Setting addon default-storageclass=true in "newest-cni-516412"
	W1006 01:42:34.092101  123852 addons.go:240] addon default-storageclass should already be in state true
	I1006 01:42:34.092127  123852 host.go:66] Checking if "newest-cni-516412" exists ...
	I1006 01:42:34.092626  123852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:42:34.092673  123852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:42:34.105486  123852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
	I1006 01:42:34.106123  123852 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:42:34.106775  123852 main.go:141] libmachine: Using API Version  1
	I1006 01:42:34.106844  123852 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:42:34.107261  123852 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:42:34.107400  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetState
	I1006 01:42:34.107734  123852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38349
	I1006 01:42:34.108258  123852 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:42:34.108723  123852 main.go:141] libmachine: Using API Version  1
	I1006 01:42:34.108741  123852 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:42:34.109070  123852 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:42:34.109248  123852 main.go:141] libmachine: (newest-cni-516412) Calling .DriverName
	I1006 01:42:34.109279  123852 main.go:141] libmachine: (newest-cni-516412) Calling .DriverName
	I1006 01:42:34.111441  123852 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1006 01:42:34.109778  123852 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1006 01:42:34.111495  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHHostname
	I1006 01:42:34.109882  123852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I1006 01:42:34.112042  123852 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:42:34.113121  123852 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1006 01:42:34.113139  123852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1006 01:42:34.113161  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHHostname
	I1006 01:42:34.113567  123852 main.go:141] libmachine: Using API Version  1
	I1006 01:42:34.113590  123852 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:42:34.113974  123852 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:42:34.114171  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetState
	I1006 01:42:34.115064  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:34.116696  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:34.116727  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:34.116750  123852 main.go:141] libmachine: (newest-cni-516412) Calling .DriverName
	I1006 01:42:34.116811  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHPort
	I1006 01:42:34.116943  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:34.118640  123852 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 01:42:34.117126  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHUsername
	I1006 01:42:34.117305  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:34.118155  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHPort
	I1006 01:42:34.120136  123852 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 01:42:34.120152  123852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 01:42:34.120177  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHHostname
	I1006 01:42:34.120234  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:34.120265  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:34.120290  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:34.120486  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHUsername
	I1006 01:42:34.120485  123852 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-68418/.minikube/machines/newest-cni-516412/id_rsa Username:docker}
	I1006 01:42:34.120620  123852 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-68418/.minikube/machines/newest-cni-516412/id_rsa Username:docker}
	I1006 01:42:34.121865  123852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40793
	I1006 01:42:34.122208  123852 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:42:34.122691  123852 main.go:141] libmachine: Using API Version  1
	I1006 01:42:34.122708  123852 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:42:34.123062  123852 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:42:34.123196  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetState
	I1006 01:42:34.123586  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:34.124252  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:34.124289  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:34.124441  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHPort
	I1006 01:42:34.124625  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:34.124795  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHUsername
	I1006 01:42:34.124927  123852 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-68418/.minikube/machines/newest-cni-516412/id_rsa Username:docker}
	I1006 01:42:34.126004  123852 main.go:141] libmachine: (newest-cni-516412) Calling .DriverName
	I1006 01:42:34.128077  123852 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1006 01:42:34.129418  123852 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1006 01:42:34.043266  120254 system_pods.go:86] 8 kube-system pods found
	I1006 01:42:34.043303  120254 system_pods.go:89] "coredns-5644d7b6d9-4pfcz" [56d0d597-ff05-4887-9112-6509320988bb] Running
	I1006 01:42:34.043313  120254 system_pods.go:89] "etcd-old-k8s-version-456697" [276fcd61-2966-4779-ae1c-8141f6d08b67] Running
	I1006 01:42:34.043320  120254 system_pods.go:89] "kube-apiserver-old-k8s-version-456697" [b91fbdcd-ba58-4289-bdf7-44904e1b25db] Running
	I1006 01:42:34.043327  120254 system_pods.go:89] "kube-controller-manager-old-k8s-version-456697" [4fb9c67b-ceae-4ef3-b74e-7e36ca9b5984] Running
	I1006 01:42:34.043334  120254 system_pods.go:89] "kube-proxy-9h6k5" [4302798d-698e-435d-bdb3-ff2d185bfd97] Running
	I1006 01:42:34.043341  120254 system_pods.go:89] "kube-scheduler-old-k8s-version-456697" [9a7012e2-cd19-440a-90f7-214122982fb6] Running
	I1006 01:42:34.043351  120254 system_pods.go:89] "metrics-server-74d5856cc6-72rfp" [96e0ee39-d033-4749-94de-7dc5895a0ba1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 01:42:34.043361  120254 system_pods.go:89] "storage-provisioner" [ba75e9ab-d81f-4495-98e1-1ac980f95b9b] Running
	I1006 01:42:34.043373  120254 system_pods.go:126] duration metric: took 1m21.197929477s to wait for k8s-apps to be running ...
	I1006 01:42:34.043387  120254 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 01:42:34.043444  120254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 01:42:34.063180  120254 system_svc.go:56] duration metric: took 19.783655ms WaitForService to wait for kubelet.
	I1006 01:42:34.063202  120254 kubeadm.go:581] duration metric: took 1m25.071245746s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1006 01:42:34.063223  120254 node_conditions.go:102] verifying NodePressure condition ...
	I1006 01:42:34.070115  120254 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1006 01:42:34.070144  120254 node_conditions.go:123] node cpu capacity is 2
	I1006 01:42:34.070155  120254 node_conditions.go:105] duration metric: took 6.925975ms to run NodePressure ...
	I1006 01:42:34.070170  120254 start.go:228] waiting for startup goroutines ...
	I1006 01:42:34.070181  120254 start.go:233] waiting for cluster config update ...
	I1006 01:42:34.070194  120254 start.go:242] writing updated cluster config ...
	I1006 01:42:34.070482  120254 ssh_runner.go:195] Run: rm -f paused
	I1006 01:42:34.138738  120254 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I1006 01:42:34.140441  120254 out.go:177] 
	W1006 01:42:34.142147  120254 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I1006 01:42:34.143638  120254 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1006 01:42:34.145616  120254 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-456697" cluster and "default" namespace by default
	I1006 01:42:34.130713  123852 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1006 01:42:34.130736  123852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1006 01:42:34.130761  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHHostname
	I1006 01:42:34.133931  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:34.134321  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:34.134351  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:34.134608  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHPort
	I1006 01:42:34.134796  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:34.134967  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHUsername
	I1006 01:42:34.135130  123852 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-68418/.minikube/machines/newest-cni-516412/id_rsa Username:docker}
	I1006 01:42:34.141854  123852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43839
	I1006 01:42:34.142381  123852 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:42:34.142998  123852 main.go:141] libmachine: Using API Version  1
	I1006 01:42:34.143022  123852 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:42:34.143441  123852 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:42:34.144137  123852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:42:34.144191  123852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:42:34.161576  123852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44223
	I1006 01:42:34.162117  123852 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:42:34.162694  123852 main.go:141] libmachine: Using API Version  1
	I1006 01:42:34.162722  123852 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:42:34.163114  123852 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:42:34.163315  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetState
	I1006 01:42:34.165021  123852 main.go:141] libmachine: (newest-cni-516412) Calling .DriverName
	I1006 01:42:34.165315  123852 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 01:42:34.165336  123852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 01:42:34.165363  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHHostname
	I1006 01:42:34.168603  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:34.169603  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHPort
	I1006 01:42:34.169615  123852 main.go:141] libmachine: (newest-cni-516412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:c7", ip: ""} in network mk-newest-cni-516412: {Iface:virbr3 ExpiryTime:2023-10-06 02:42:02 +0000 UTC Type:0 Mac:52:54:00:3b:3a:c7 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:newest-cni-516412 Clientid:01:52:54:00:3b:3a:c7}
	I1006 01:42:34.169646  123852 main.go:141] libmachine: (newest-cni-516412) DBG | domain newest-cni-516412 has defined IP address 192.168.61.107 and MAC address 52:54:00:3b:3a:c7 in network mk-newest-cni-516412
	I1006 01:42:34.169797  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHKeyPath
	I1006 01:42:34.169954  123852 main.go:141] libmachine: (newest-cni-516412) Calling .GetSSHUsername
	I1006 01:42:34.170082  123852 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-68418/.minikube/machines/newest-cni-516412/id_rsa Username:docker}
	I1006 01:42:34.261753  123852 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 01:42:34.268962  123852 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1006 01:42:34.268985  123852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1006 01:42:34.317176  123852 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1006 01:42:34.317203  123852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1006 01:42:34.366475  123852 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 01:42:34.398170  123852 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1006 01:42:34.398196  123852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1006 01:42:34.440660  123852 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1006 01:42:34.440684  123852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1006 01:42:34.479286  123852 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 01:42:34.479309  123852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1006 01:42:34.526365  123852 api_server.go:52] waiting for apiserver process to appear ...
	I1006 01:42:34.526397  123852 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1006 01:42:34.526426  123852 docker.go:689] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1006 01:42:34.526444  123852 cache_images.go:84] Images are preloaded, skipping loading
	I1006 01:42:34.526455  123852 cache_images.go:262] succeeded pushing to: newest-cni-516412
	I1006 01:42:34.526462  123852 cache_images.go:263] failed pushing to: 
	I1006 01:42:34.526462  123852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 01:42:34.526502  123852 main.go:141] libmachine: Making call to close driver server
	I1006 01:42:34.526516  123852 main.go:141] libmachine: (newest-cni-516412) Calling .Close
	I1006 01:42:34.526798  123852 main.go:141] libmachine: (newest-cni-516412) DBG | Closing plugin on server side
	I1006 01:42:34.526829  123852 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:42:34.526843  123852 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:42:34.526854  123852 main.go:141] libmachine: Making call to close driver server
	I1006 01:42:34.526871  123852 main.go:141] libmachine: (newest-cni-516412) Calling .Close
	I1006 01:42:34.527131  123852 main.go:141] libmachine: (newest-cni-516412) DBG | Closing plugin on server side
	I1006 01:42:34.527158  123852 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:42:34.527176  123852 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:42:34.541357  123852 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1006 01:42:34.541385  123852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1006 01:42:34.581601  123852 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1006 01:42:34.581624  123852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1006 01:42:34.617808  123852 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1006 01:42:34.617854  123852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1006 01:42:34.639319  123852 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 01:42:34.710138  123852 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1006 01:42:34.710163  123852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1006 01:42:34.860030  123852 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1006 01:42:34.860057  123852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1006 01:42:34.930679  123852 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1006 01:42:34.930707  123852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1006 01:42:34.978241  123852 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1006 01:42:34.978274  123852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1006 01:42:35.029754  123852 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1006 01:42:36.316929  123852 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.055131436s)
	I1006 01:42:36.316986  123852 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.950465235s)
	I1006 01:42:36.316998  123852 main.go:141] libmachine: Making call to close driver server
	I1006 01:42:36.317012  123852 main.go:141] libmachine: (newest-cni-516412) Calling .Close
	I1006 01:42:36.317025  123852 main.go:141] libmachine: Making call to close driver server
	I1006 01:42:36.317042  123852 main.go:141] libmachine: (newest-cni-516412) Calling .Close
	I1006 01:42:36.317068  123852 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.790578856s)
	I1006 01:42:36.317105  123852 api_server.go:72] duration metric: took 2.244197763s to wait for apiserver process to appear ...
	I1006 01:42:36.317117  123852 api_server.go:88] waiting for apiserver healthz status ...
	I1006 01:42:36.317134  123852 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I1006 01:42:36.317479  123852 main.go:141] libmachine: (newest-cni-516412) DBG | Closing plugin on server side
	I1006 01:42:36.317496  123852 main.go:141] libmachine: (newest-cni-516412) DBG | Closing plugin on server side
	I1006 01:42:36.317498  123852 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:42:36.317563  123852 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:42:36.317584  123852 main.go:141] libmachine: Making call to close driver server
	I1006 01:42:36.317594  123852 main.go:141] libmachine: (newest-cni-516412) Calling .Close
	I1006 01:42:36.317510  123852 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:42:36.317655  123852 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:42:36.317666  123852 main.go:141] libmachine: Making call to close driver server
	I1006 01:42:36.317675  123852 main.go:141] libmachine: (newest-cni-516412) Calling .Close
	I1006 01:42:36.317916  123852 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:42:36.317936  123852 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:42:36.317956  123852 main.go:141] libmachine: (newest-cni-516412) DBG | Closing plugin on server side
	I1006 01:42:36.318053  123852 main.go:141] libmachine: (newest-cni-516412) DBG | Closing plugin on server side
	I1006 01:42:36.318078  123852 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:42:36.318096  123852 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:42:36.329064  123852 main.go:141] libmachine: Making call to close driver server
	I1006 01:42:36.329091  123852 main.go:141] libmachine: (newest-cni-516412) Calling .Close
	I1006 01:42:36.329334  123852 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:42:36.329352  123852 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:42:36.329901  123852 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I1006 01:42:36.331030  123852 api_server.go:141] control plane version: v1.28.2
	I1006 01:42:36.331051  123852 api_server.go:131] duration metric: took 13.927365ms to wait for apiserver health ...
	I1006 01:42:36.331059  123852 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 01:42:36.342384  123852 system_pods.go:59] 8 kube-system pods found
	I1006 01:42:36.342413  123852 system_pods.go:61] "coredns-5dd5756b68-chtwr" [47db9eff-1ea8-4208-8e1d-a5a379a226ea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 01:42:36.342422  123852 system_pods.go:61] "etcd-newest-cni-516412" [37d0b87a-f92d-46ef-8b52-0978b969f77f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 01:42:36.342432  123852 system_pods.go:61] "kube-apiserver-newest-cni-516412" [607fba0a-3396-44ea-8ad1-f5aaa1bb72d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 01:42:36.342439  123852 system_pods.go:61] "kube-controller-manager-newest-cni-516412" [9ec8626c-0c4b-4710-9f1d-c60e4e21c2ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 01:42:36.342444  123852 system_pods.go:61] "kube-proxy-68ldw" [8c0ddd2c-d561-4e3c-8fd0-f794e546389e] Running
	I1006 01:42:36.342450  123852 system_pods.go:61] "kube-scheduler-newest-cni-516412" [46f538ba-a327-4510-9879-c88916158d3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 01:42:36.342455  123852 system_pods.go:61] "metrics-server-57f55c9bc5-pdjtx" [1e0410b4-4b16-44c4-a1a1-9f7c950257eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 01:42:36.342460  123852 system_pods.go:61] "storage-provisioner" [219ca167-3b81-4c11-b9d0-1636bded191c] Running
	I1006 01:42:36.342467  123852 system_pods.go:74] duration metric: took 11.40237ms to wait for pod list to return data ...
	I1006 01:42:36.342477  123852 default_sa.go:34] waiting for default service account to be created ...
	I1006 01:42:36.345126  123852 default_sa.go:45] found service account: "default"
	I1006 01:42:36.345154  123852 default_sa.go:55] duration metric: took 2.670449ms for default service account to be created ...
	I1006 01:42:36.345165  123852 kubeadm.go:581] duration metric: took 2.272259979s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1006 01:42:36.345182  123852 node_conditions.go:102] verifying NodePressure condition ...
	I1006 01:42:36.349236  123852 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1006 01:42:36.349258  123852 node_conditions.go:123] node cpu capacity is 2
	I1006 01:42:36.349268  123852 node_conditions.go:105] duration metric: took 4.081522ms to run NodePressure ...
	I1006 01:42:36.349281  123852 start.go:228] waiting for startup goroutines ...
	I1006 01:42:36.540608  123852 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.901242796s)
	I1006 01:42:36.540667  123852 main.go:141] libmachine: Making call to close driver server
	I1006 01:42:36.540680  123852 main.go:141] libmachine: (newest-cni-516412) Calling .Close
	I1006 01:42:36.541022  123852 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:42:36.541049  123852 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:42:36.541066  123852 main.go:141] libmachine: Making call to close driver server
	I1006 01:42:36.541076  123852 main.go:141] libmachine: (newest-cni-516412) Calling .Close
	I1006 01:42:36.541074  123852 main.go:141] libmachine: (newest-cni-516412) DBG | Closing plugin on server side
	I1006 01:42:36.541486  123852 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:42:36.541505  123852 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:42:36.541518  123852 addons.go:467] Verifying addon metrics-server=true in "newest-cni-516412"
	I1006 01:42:36.541519  123852 main.go:141] libmachine: (newest-cni-516412) DBG | Closing plugin on server side
	I1006 01:42:37.178077  123852 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.148267954s)
	I1006 01:42:37.178172  123852 main.go:141] libmachine: Making call to close driver server
	I1006 01:42:37.178193  123852 main.go:141] libmachine: (newest-cni-516412) Calling .Close
	I1006 01:42:37.178529  123852 main.go:141] libmachine: (newest-cni-516412) DBG | Closing plugin on server side
	I1006 01:42:37.178574  123852 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:42:37.178589  123852 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:42:37.178607  123852 main.go:141] libmachine: Making call to close driver server
	I1006 01:42:37.178622  123852 main.go:141] libmachine: (newest-cni-516412) Calling .Close
	I1006 01:42:37.178879  123852 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:42:37.178898  123852 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:42:37.180742  123852 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-516412 addons enable metrics-server	
	
	
	I1006 01:42:37.182332  123852 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1006 01:42:37.183796  123852 addons.go:502] enable addons completed in 3.122994187s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1006 01:42:37.183835  123852 start.go:233] waiting for cluster config update ...
	I1006 01:42:37.183853  123852 start.go:242] writing updated cluster config ...
	I1006 01:42:37.184116  123852 ssh_runner.go:195] Run: rm -f paused
	I1006 01:42:37.239295  123852 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1006 01:42:37.242292  123852 out.go:177] * Done! kubectl is now configured to use "newest-cni-516412" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Fri 2023-10-06 01:35:00 UTC, ends at Fri 2023-10-06 01:42:45 UTC. --
	Oct 06 01:41:28 old-k8s-version-456697 dockerd[1093]: time="2023-10-06T01:41:28.512583789Z" level=warning msg="cleaning up after shim disconnected" id=c320c46ea8c05f36dfad256dc98452f4e7b7e4bba567ecb5f6f6e4f6c93a25bd namespace=moby
	Oct 06 01:41:28 old-k8s-version-456697 dockerd[1093]: time="2023-10-06T01:41:28.512707032Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 06 01:41:28 old-k8s-version-456697 dockerd[1087]: time="2023-10-06T01:41:28.514230208Z" level=info msg="ignoring event" container=c320c46ea8c05f36dfad256dc98452f4e7b7e4bba567ecb5f6f6e4f6c93a25bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 01:41:38 old-k8s-version-456697 dockerd[1093]: time="2023-10-06T01:41:38.641742831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 06 01:41:38 old-k8s-version-456697 dockerd[1093]: time="2023-10-06T01:41:38.641844498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 06 01:41:38 old-k8s-version-456697 dockerd[1093]: time="2023-10-06T01:41:38.641879830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 06 01:41:38 old-k8s-version-456697 dockerd[1093]: time="2023-10-06T01:41:38.641900667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 06 01:41:39 old-k8s-version-456697 dockerd[1087]: time="2023-10-06T01:41:39.072764202Z" level=info msg="ignoring event" container=eaac5b36985a34f38421a2ed8db5f33d48b87624d3641c9f0cbdf2f372a583d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 01:41:39 old-k8s-version-456697 dockerd[1093]: time="2023-10-06T01:41:39.074194557Z" level=info msg="shim disconnected" id=eaac5b36985a34f38421a2ed8db5f33d48b87624d3641c9f0cbdf2f372a583d7 namespace=moby
	Oct 06 01:41:39 old-k8s-version-456697 dockerd[1093]: time="2023-10-06T01:41:39.074308509Z" level=warning msg="cleaning up after shim disconnected" id=eaac5b36985a34f38421a2ed8db5f33d48b87624d3641c9f0cbdf2f372a583d7 namespace=moby
	Oct 06 01:41:39 old-k8s-version-456697 dockerd[1093]: time="2023-10-06T01:41:39.074320721Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 06 01:41:52 old-k8s-version-456697 dockerd[1087]: time="2023-10-06T01:41:52.111971337Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 06 01:41:52 old-k8s-version-456697 dockerd[1087]: time="2023-10-06T01:41:52.112383882Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 06 01:41:52 old-k8s-version-456697 dockerd[1087]: time="2023-10-06T01:41:52.117100606Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 06 01:42:00 old-k8s-version-456697 dockerd[1093]: time="2023-10-06T01:42:00.188973431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 06 01:42:00 old-k8s-version-456697 dockerd[1093]: time="2023-10-06T01:42:00.189151634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 06 01:42:00 old-k8s-version-456697 dockerd[1093]: time="2023-10-06T01:42:00.189793255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 06 01:42:00 old-k8s-version-456697 dockerd[1093]: time="2023-10-06T01:42:00.189954064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 06 01:42:00 old-k8s-version-456697 dockerd[1093]: time="2023-10-06T01:42:00.600973144Z" level=info msg="shim disconnected" id=01b5b51e46ceae282a985edcbfa208b8afdf1a7319faa4c573b2179764c4e25a namespace=moby
	Oct 06 01:42:00 old-k8s-version-456697 dockerd[1093]: time="2023-10-06T01:42:00.601543422Z" level=warning msg="cleaning up after shim disconnected" id=01b5b51e46ceae282a985edcbfa208b8afdf1a7319faa4c573b2179764c4e25a namespace=moby
	Oct 06 01:42:00 old-k8s-version-456697 dockerd[1087]: time="2023-10-06T01:42:00.601501503Z" level=info msg="ignoring event" container=01b5b51e46ceae282a985edcbfa208b8afdf1a7319faa4c573b2179764c4e25a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 01:42:00 old-k8s-version-456697 dockerd[1093]: time="2023-10-06T01:42:00.601798389Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 06 01:42:40 old-k8s-version-456697 dockerd[1087]: time="2023-10-06T01:42:40.111694220Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 06 01:42:40 old-k8s-version-456697 dockerd[1087]: time="2023-10-06T01:42:40.111775329Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 06 01:42:40 old-k8s-version-456697 dockerd[1087]: time="2023-10-06T01:42:40.115180507Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS                      PORTS     NAMES
	01b5b51e46ce   a90209bb39e3             "nginx -g 'daemon of…"   45 seconds ago       Exited (1) 44 seconds ago             k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-xd2xz_kubernetes-dashboard_919538a7-8607-49e6-a802-fb773e5e06f4_3
	012a09b34f73   kubernetesui/dashboard   "/dashboard --insecu…"   About a minute ago   Up About a minute                     k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-xlkzn_kubernetes-dashboard_cd59a10b-708e-4ae4-b425-a95fb34e6ad8_0
	183991e1910f   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kubernetes-dashboard-84b68f675b-xlkzn_kubernetes-dashboard_cd59a10b-708e-4ae4-b425-a95fb34e6ad8_0
	edc089d1c70c   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_dashboard-metrics-scraper-d6b4b5544-xd2xz_kubernetes-dashboard_919538a7-8607-49e6-a802-fb773e5e06f4_0
	71a6dff3e8ba   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_metrics-server-74d5856cc6-72rfp_kube-system_96e0ee39-d033-4749-94de-7dc5895a0ba1_0
	c36592e1f934   6e38f40d628d             "/storage-provisioner"   About a minute ago   Up About a minute                     k8s_storage-provisioner_storage-provisioner_kube-system_ba75e9ab-d81f-4495-98e1-1ac980f95b9b_0
	b00dfd37ab18   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_storage-provisioner_kube-system_ba75e9ab-d81f-4495-98e1-1ac980f95b9b_0
	69c9e8177b2f   c21b0c7400f9             "/usr/local/bin/kube…"   About a minute ago   Up About a minute                     k8s_kube-proxy_kube-proxy-9h6k5_kube-system_4302798d-698e-435d-bdb3-ff2d185bfd97_0
	36c604da0cb0   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-proxy-9h6k5_kube-system_4302798d-698e-435d-bdb3-ff2d185bfd97_0
	b66ceea688e3   bf261d157914             "/coredns -conf /etc…"   About a minute ago   Up About a minute                     k8s_coredns_coredns-5644d7b6d9-4pfcz_kube-system_56d0d597-ff05-4887-9112-6509320988bb_0
	a49bf3c015a6   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_coredns-5644d7b6d9-4pfcz_kube-system_56d0d597-ff05-4887-9112-6509320988bb_0
	cf2dbe51ed4e   b2756210eeab             "etcd --advertise-cl…"   2 minutes ago        Up 2 minutes                          k8s_etcd_etcd-old-k8s-version-456697_kube-system_c6fe7d3aaaeecf320c1ca2f9d9b0b8be_0
	aa490e92da24   301ddc62b80b             "kube-scheduler --au…"   2 minutes ago        Up 2 minutes                          k8s_kube-scheduler_kube-scheduler-old-k8s-version-456697_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	685e98025168   b305571ca60a             "kube-apiserver --ad…"   2 minutes ago        Up 2 minutes                          k8s_kube-apiserver_kube-apiserver-old-k8s-version-456697_kube-system_7e96a2c40423236d5067d038fb52891e_0
	8ba09f125093   k8s.gcr.io/pause:3.1     "/pause"                 2 minutes ago        Up 2 minutes                          k8s_POD_etcd-old-k8s-version-456697_kube-system_c6fe7d3aaaeecf320c1ca2f9d9b0b8be_0
	d79893226d3e   k8s.gcr.io/pause:3.1     "/pause"                 2 minutes ago        Up 2 minutes                          k8s_POD_kube-scheduler-old-k8s-version-456697_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	8c55b4dbb5ed   06a629a7e51c             "kube-controller-man…"   2 minutes ago        Up 2 minutes                          k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-456697_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	c2e30a29eca8   k8s.gcr.io/pause:3.1     "/pause"                 2 minutes ago        Up 2 minutes                          k8s_POD_kube-apiserver-old-k8s-version-456697_kube-system_7e96a2c40423236d5067d038fb52891e_0
	b71c2b530ce9   k8s.gcr.io/pause:3.1     "/pause"                 2 minutes ago        Up 2 minutes                          k8s_POD_kube-controller-manager-old-k8s-version-456697_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	time="2023-10-06T01:42:45Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	
	* 
	* ==> coredns [b66ceea688e3] <==
	* .:53
	2023-10-06T01:41:10.487Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-10-06T01:41:10.487Z [INFO] CoreDNS-1.6.2
	2023-10-06T01:41:10.487Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-10-06T01:41:45.846Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	[INFO] Reloading complete
	2023-10-06T01:41:45.906Z [INFO] 127.0.0.1:56218 - 59383 "HINFO IN 3914746339893633893.8851680893609424892. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.060239842s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-456697
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-456697
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=84890cb24d0240d9d992d7c7712ee519ceed4154
	                    minikube.k8s.io/name=old-k8s-version-456697
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_06T01_40_52_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Oct 2023 01:40:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Oct 2023 01:41:48 +0000   Fri, 06 Oct 2023 01:40:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Oct 2023 01:41:48 +0000   Fri, 06 Oct 2023 01:40:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Oct 2023 01:41:48 +0000   Fri, 06 Oct 2023 01:40:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Oct 2023 01:41:48 +0000   Fri, 06 Oct 2023 01:40:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.78
	  Hostname:    old-k8s-version-456697
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 ef77e32cc2424a84b083d261408be8c2
	 System UUID:                ef77e32c-c242-4a84-b083-d261408be8c2
	 Boot ID:                    5d2f5ed2-e483-4475-92b2-cd2f47784970
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.6
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-4pfcz                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     96s
	  kube-system                etcd-old-k8s-version-456697                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                kube-apiserver-old-k8s-version-456697             250m (12%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                kube-controller-manager-old-k8s-version-456697    200m (10%)    0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                kube-proxy-9h6k5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                kube-scheduler-old-k8s-version-456697             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                metrics-server-74d5856cc6-72rfp                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         93s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kubernetes-dashboard       dashboard-metrics-scraper-d6b4b5544-xd2xz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kubernetes-dashboard       kubernetes-dashboard-84b68f675b-xlkzn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From                                Message
	  ----    ------                   ----                 ----                                -------
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet, old-k8s-version-456697     Node old-k8s-version-456697 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x7 over 2m4s)  kubelet, old-k8s-version-456697     Node old-k8s-version-456697 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x8 over 2m4s)  kubelet, old-k8s-version-456697     Node old-k8s-version-456697 status is now: NodeHasSufficientPID
	  Normal  Starting                 94s                  kube-proxy, old-k8s-version-456697  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Oct 6 01:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067910] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.428817] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.009382] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.121857] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Oct 6 01:35] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.077924] systemd-fstab-generator[514]: Ignoring "noauto" for root device
	[  +0.161778] systemd-fstab-generator[525]: Ignoring "noauto" for root device
	[  +1.300464] systemd-fstab-generator[796]: Ignoring "noauto" for root device
	[  +0.325902] systemd-fstab-generator[833]: Ignoring "noauto" for root device
	[  +0.142789] systemd-fstab-generator[844]: Ignoring "noauto" for root device
	[  +0.150452] systemd-fstab-generator[857]: Ignoring "noauto" for root device
	[  +6.162158] systemd-fstab-generator[1078]: Ignoring "noauto" for root device
	[  +3.557597] kauditd_printk_skb: 67 callbacks suppressed
	[ +13.125514] systemd-fstab-generator[1500]: Ignoring "noauto" for root device
	[  +0.502086] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.182518] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 6 01:36] kauditd_printk_skb: 6 callbacks suppressed
	[Oct 6 01:40] systemd-fstab-generator[5586]: Ignoring "noauto" for root device
	[Oct 6 01:41] kauditd_printk_skb: 6 callbacks suppressed
	
	* 
	* ==> etcd [cf2dbe51ed4e] <==
	* 2023-10-06 01:40:43.916125 I | raft: 83fde65c75733ea3 became leader at term 2
	2023-10-06 01:40:43.920966 I | raft: raft.node: 83fde65c75733ea3 elected leader 83fde65c75733ea3 at term 2
	2023-10-06 01:40:43.951312 I | etcdserver: setting up the initial cluster version to 3.3
	2023-10-06 01:40:43.952875 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-10-06 01:40:43.952924 I | etcdserver/api: enabled capabilities for version 3.3
	2023-10-06 01:40:43.952956 I | etcdserver: published {Name:old-k8s-version-456697 ClientURLs:[https://192.168.39.78:2379]} to cluster 254f9db842b1870b
	2023-10-06 01:40:43.953725 I | embed: ready to serve client requests
	2023-10-06 01:40:43.954967 I | embed: serving client requests on 192.168.39.78:2379
	2023-10-06 01:40:43.955329 I | embed: ready to serve client requests
	2023-10-06 01:40:43.956191 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-06 01:40:52.381236 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/pod-garbage-collector\" " with result "range_response_count:1 size:210" took too long (104.593113ms) to execute
	2023-10-06 01:40:52.382528 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-456697\" " with result "range_response_count:1 size:2746" took too long (106.073694ms) to execute
	2023-10-06 01:41:02.916390 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" " with result "range_response_count:1 size:210" took too long (330.603341ms) to execute
	2023-10-06 01:41:03.396680 W | etcdserver: request "header:<ID:4513604095028255634 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" mod_revision:234 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" value_size:178 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" > >>" with result "size:16" took too long (349.650353ms) to execute
	2023-10-06 01:41:03.397586 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/pv-protection-controller\" " with result "range_response_count:0 size:5" took too long (461.111161ms) to execute
	2023-10-06 01:41:03.664655 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/pv-protection-controller\" " with result "range_response_count:1 size:216" took too long (256.135069ms) to execute
	2023-10-06 01:41:04.336470 W | etcdserver: request "header:<ID:4513604095028255642 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/pv-protection-controller\" mod_revision:237 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/pv-protection-controller\" value_size:184 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/pv-protection-controller\" > >>" with result "size:16" took too long (401.276488ms) to execute
	2023-10-06 01:41:04.336847 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/token-cleaner\" " with result "range_response_count:0 size:5" took too long (657.679417ms) to execute
	2023-10-06 01:41:04.337116 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (460.017207ms) to execute
	2023-10-06 01:41:04.454194 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/token-cleaner\" " with result "range_response_count:1 size:193" took too long (100.899791ms) to execute
	2023-10-06 01:41:04.669860 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/node-controller\" " with result "range_response_count:1 size:242" took too long (199.377333ms) to execute
	2023-10-06 01:41:12.300298 W | etcdserver: read-only range request "key:\"/registry/rolebindings/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (112.112082ms) to execute
	2023-10-06 01:41:12.302056 W | etcdserver: read-only range request "key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544.178b611f62553a2b\" " with result "range_response_count:1 size:691" took too long (130.922051ms) to execute
	2023-10-06 01:41:12.302942 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (142.828607ms) to execute
	2023-10-06 01:41:21.290478 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:172" took too long (131.478442ms) to execute
	
	* 
	* ==> kernel <==
	*  01:42:45 up 7 min,  0 users,  load average: 0.59, 0.77, 0.39
	Linux old-k8s-version-456697 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [685e98025168] <==
	* I1006 01:40:51.168822       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	W1006 01:40:51.182517       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.39.78]
	I1006 01:40:51.184173       1 controller.go:606] quota admission added evaluator for: endpoints
	I1006 01:40:52.065232       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1006 01:40:52.596872       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1006 01:40:52.697790       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1006 01:41:04.338835       1 trace.go:116] Trace[254173974]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/token-cleaner (started: 2023-10-06 01:41:03.678489944 +0000 UTC m=+20.998686399) (total time: 660.30922ms):
	Trace[254173974]: [660.30922ms] [660.266795ms] END
	I1006 01:41:04.339250       1 trace.go:116] Trace[375549262]: "GuaranteedUpdate etcd3" type:*core.ServiceAccount (started: 2023-10-06 01:41:03.67769001 +0000 UTC m=+20.997886475) (total time: 661.5302ms):
	Trace[375549262]: [661.502336ms] [661.281715ms] Transaction committed
	I1006 01:41:04.339843       1 trace.go:116] Trace[1118330694]: "Update" url:/api/v1/namespaces/kube-system/serviceaccounts/pv-protection-controller (started: 2023-10-06 01:41:03.677576336 +0000 UTC m=+20.997772800) (total time: 662.249191ms):
	Trace[1118330694]: [662.205543ms] [662.138762ms] Object stored in database
	I1006 01:41:08.922091       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1006 01:41:09.004527       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I1006 01:41:09.087674       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1006 01:41:13.618333       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1006 01:41:13.618447       1 handler_proxy.go:99] no RequestInfo found in the context
	E1006 01:41:13.618530       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1006 01:41:13.618568       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1006 01:42:13.619067       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1006 01:42:13.619492       1 handler_proxy.go:99] no RequestInfo found in the context
	E1006 01:42:13.619630       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1006 01:42:13.619803       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [8c55b4dbb5ed] <==
	* E1006 01:41:12.128305       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1006 01:41:12.128752       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"8a32971f-aa87-4740-a462-b7d853f96029", APIVersion:"apps/v1", ResourceVersion:"410", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1006 01:41:12.129107       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"82575af7-bdb8-4689-b6b5-67b2ecfbded4", APIVersion:"apps/v1", ResourceVersion:"406", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1006 01:41:12.135261       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1006 01:41:12.135320       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"82575af7-bdb8-4689-b6b5-67b2ecfbded4", APIVersion:"apps/v1", ResourceVersion:"406", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1006 01:41:12.151560       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1006 01:41:12.152502       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"82575af7-bdb8-4689-b6b5-67b2ecfbded4", APIVersion:"apps/v1", ResourceVersion:"406", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1006 01:41:12.153760       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1006 01:41:12.319386       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1006 01:41:12.319881       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1006 01:41:12.319970       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"8a32971f-aa87-4740-a462-b7d853f96029", APIVersion:"apps/v1", ResourceVersion:"418", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1006 01:41:12.320064       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"82575af7-bdb8-4689-b6b5-67b2ecfbded4", APIVersion:"apps/v1", ResourceVersion:"406", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1006 01:41:12.393169       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1006 01:41:12.393439       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"8a32971f-aa87-4740-a462-b7d853f96029", APIVersion:"apps/v1", ResourceVersion:"418", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1006 01:41:12.426984       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1006 01:41:12.427304       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"8a32971f-aa87-4740-a462-b7d853f96029", APIVersion:"apps/v1", ResourceVersion:"418", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1006 01:41:12.705609       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-74d5856cc6", UID:"ee9bfca3-75d6-479c-900d-0289181457cf", APIVersion:"apps/v1", ResourceVersion:"371", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-74d5856cc6-72rfp
	I1006 01:41:13.467559       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"82575af7-bdb8-4689-b6b5-67b2ecfbded4", APIVersion:"apps/v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-d6b4b5544-xd2xz
	I1006 01:41:13.503838       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"8a32971f-aa87-4740-a462-b7d853f96029", APIVersion:"apps/v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-84b68f675b-xlkzn
	E1006 01:41:40.544814       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1006 01:41:41.159737       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1006 01:42:10.797214       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1006 01:42:13.162510       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1006 01:42:41.049686       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1006 01:42:45.164658       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [69c9e8177b2f] <==
	* W1006 01:41:11.118614       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1006 01:41:11.154574       1 node.go:135] Successfully retrieved node IP: 192.168.39.78
	I1006 01:41:11.154716       1 server_others.go:149] Using iptables Proxier.
	I1006 01:41:11.164846       1 server.go:529] Version: v1.16.0
	I1006 01:41:11.167220       1 config.go:313] Starting service config controller
	I1006 01:41:11.167271       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1006 01:41:11.182564       1 config.go:131] Starting endpoints config controller
	I1006 01:41:11.182679       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1006 01:41:11.283470       1 shared_informer.go:204] Caches are synced for service config 
	I1006 01:41:11.294970       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [aa490e92da24] <==
	* W1006 01:40:47.869193       1 authentication.go:79] Authentication is disabled
	I1006 01:40:47.869207       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1006 01:40:47.870928       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1006 01:40:47.977324       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1006 01:40:47.982824       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1006 01:40:47.985232       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1006 01:40:47.988555       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1006 01:40:47.988602       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1006 01:40:47.988646       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1006 01:40:47.988679       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1006 01:40:47.988710       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1006 01:40:47.988737       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1006 01:40:47.988767       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1006 01:40:47.988796       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1006 01:40:48.980618       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1006 01:40:48.986864       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1006 01:40:48.987647       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1006 01:40:48.991774       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1006 01:40:48.997480       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1006 01:40:49.000674       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1006 01:40:49.004567       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1006 01:40:49.009492       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1006 01:40:49.010633       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1006 01:40:49.011935       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1006 01:40:49.020437       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Fri 2023-10-06 01:35:00 UTC, ends at Fri 2023-10-06 01:42:45 UTC. --
	Oct 06 01:41:30 old-k8s-version-456697 kubelet[5604]: E1006 01:41:30.076215    5604 pod_workers.go:191] Error syncing pod 919538a7-8607-49e6-a802-fb773e5e06f4 ("dashboard-metrics-scraper-d6b4b5544-xd2xz_kubernetes-dashboard(919538a7-8607-49e6-a802-fb773e5e06f4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-xd2xz_kubernetes-dashboard(919538a7-8607-49e6-a802-fb773e5e06f4)"
	Oct 06 01:41:39 old-k8s-version-456697 kubelet[5604]: W1006 01:41:39.150945    5604 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-xd2xz through plugin: invalid network status for
	Oct 06 01:41:39 old-k8s-version-456697 kubelet[5604]: E1006 01:41:39.181210    5604 pod_workers.go:191] Error syncing pod 919538a7-8607-49e6-a802-fb773e5e06f4 ("dashboard-metrics-scraper-d6b4b5544-xd2xz_kubernetes-dashboard(919538a7-8607-49e6-a802-fb773e5e06f4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-xd2xz_kubernetes-dashboard(919538a7-8607-49e6-a802-fb773e5e06f4)"
	Oct 06 01:41:40 old-k8s-version-456697 kubelet[5604]: E1006 01:41:40.100617    5604 pod_workers.go:191] Error syncing pod 96e0ee39-d033-4749-94de-7dc5895a0ba1 ("metrics-server-74d5856cc6-72rfp_kube-system(96e0ee39-d033-4749-94de-7dc5895a0ba1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 06 01:41:40 old-k8s-version-456697 kubelet[5604]: W1006 01:41:40.193746    5604 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-xd2xz through plugin: invalid network status for
	Oct 06 01:41:48 old-k8s-version-456697 kubelet[5604]: E1006 01:41:48.552143    5604 pod_workers.go:191] Error syncing pod 919538a7-8607-49e6-a802-fb773e5e06f4 ("dashboard-metrics-scraper-d6b4b5544-xd2xz_kubernetes-dashboard(919538a7-8607-49e6-a802-fb773e5e06f4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-xd2xz_kubernetes-dashboard(919538a7-8607-49e6-a802-fb773e5e06f4)"
	Oct 06 01:41:52 old-k8s-version-456697 kubelet[5604]: E1006 01:41:52.117580    5604 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 06 01:41:52 old-k8s-version-456697 kubelet[5604]: E1006 01:41:52.117964    5604 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 06 01:41:52 old-k8s-version-456697 kubelet[5604]: E1006 01:41:52.118271    5604 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 06 01:41:52 old-k8s-version-456697 kubelet[5604]: E1006 01:41:52.118397    5604 pod_workers.go:191] Error syncing pod 96e0ee39-d033-4749-94de-7dc5895a0ba1 ("metrics-server-74d5856cc6-72rfp_kube-system(96e0ee39-d033-4749-94de-7dc5895a0ba1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 06 01:42:00 old-k8s-version-456697 kubelet[5604]: W1006 01:42:00.353556    5604 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-xd2xz through plugin: invalid network status for
	Oct 06 01:42:00 old-k8s-version-456697 kubelet[5604]: W1006 01:42:00.658674    5604 container.go:409] Failed to create summary reader for "/kubepods/besteffort/pod919538a7-8607-49e6-a802-fb773e5e06f4/01b5b51e46ceae282a985edcbfa208b8afdf1a7319faa4c573b2179764c4e25a": none of the resources are being tracked.
	Oct 06 01:42:01 old-k8s-version-456697 kubelet[5604]: W1006 01:42:01.595204    5604 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-xd2xz through plugin: invalid network status for
	Oct 06 01:42:01 old-k8s-version-456697 kubelet[5604]: E1006 01:42:01.600288    5604 pod_workers.go:191] Error syncing pod 919538a7-8607-49e6-a802-fb773e5e06f4 ("dashboard-metrics-scraper-d6b4b5544-xd2xz_kubernetes-dashboard(919538a7-8607-49e6-a802-fb773e5e06f4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-xd2xz_kubernetes-dashboard(919538a7-8607-49e6-a802-fb773e5e06f4)"
	Oct 06 01:42:02 old-k8s-version-456697 kubelet[5604]: W1006 01:42:02.610966    5604 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-xd2xz through plugin: invalid network status for
	Oct 06 01:42:05 old-k8s-version-456697 kubelet[5604]: E1006 01:42:05.100501    5604 pod_workers.go:191] Error syncing pod 96e0ee39-d033-4749-94de-7dc5895a0ba1 ("metrics-server-74d5856cc6-72rfp_kube-system(96e0ee39-d033-4749-94de-7dc5895a0ba1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 06 01:42:08 old-k8s-version-456697 kubelet[5604]: E1006 01:42:08.552151    5604 pod_workers.go:191] Error syncing pod 919538a7-8607-49e6-a802-fb773e5e06f4 ("dashboard-metrics-scraper-d6b4b5544-xd2xz_kubernetes-dashboard(919538a7-8607-49e6-a802-fb773e5e06f4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-xd2xz_kubernetes-dashboard(919538a7-8607-49e6-a802-fb773e5e06f4)"
	Oct 06 01:42:16 old-k8s-version-456697 kubelet[5604]: E1006 01:42:16.098552    5604 pod_workers.go:191] Error syncing pod 96e0ee39-d033-4749-94de-7dc5895a0ba1 ("metrics-server-74d5856cc6-72rfp_kube-system(96e0ee39-d033-4749-94de-7dc5895a0ba1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 06 01:42:23 old-k8s-version-456697 kubelet[5604]: E1006 01:42:23.097125    5604 pod_workers.go:191] Error syncing pod 919538a7-8607-49e6-a802-fb773e5e06f4 ("dashboard-metrics-scraper-d6b4b5544-xd2xz_kubernetes-dashboard(919538a7-8607-49e6-a802-fb773e5e06f4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-xd2xz_kubernetes-dashboard(919538a7-8607-49e6-a802-fb773e5e06f4)"
	Oct 06 01:42:27 old-k8s-version-456697 kubelet[5604]: E1006 01:42:27.102395    5604 pod_workers.go:191] Error syncing pod 96e0ee39-d033-4749-94de-7dc5895a0ba1 ("metrics-server-74d5856cc6-72rfp_kube-system(96e0ee39-d033-4749-94de-7dc5895a0ba1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 06 01:42:37 old-k8s-version-456697 kubelet[5604]: E1006 01:42:37.099906    5604 pod_workers.go:191] Error syncing pod 919538a7-8607-49e6-a802-fb773e5e06f4 ("dashboard-metrics-scraper-d6b4b5544-xd2xz_kubernetes-dashboard(919538a7-8607-49e6-a802-fb773e5e06f4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-xd2xz_kubernetes-dashboard(919538a7-8607-49e6-a802-fb773e5e06f4)"
	Oct 06 01:42:40 old-k8s-version-456697 kubelet[5604]: E1006 01:42:40.115746    5604 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 06 01:42:40 old-k8s-version-456697 kubelet[5604]: E1006 01:42:40.115829    5604 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 06 01:42:40 old-k8s-version-456697 kubelet[5604]: E1006 01:42:40.115911    5604 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 06 01:42:40 old-k8s-version-456697 kubelet[5604]: E1006 01:42:40.115955    5604 pod_workers.go:191] Error syncing pod 96e0ee39-d033-4749-94de-7dc5895a0ba1 ("metrics-server-74d5856cc6-72rfp_kube-system(96e0ee39-d033-4749-94de-7dc5895a0ba1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	
	* 
	* ==> kubernetes-dashboard [012a09b34f73] <==
	* 2023/10/06 01:41:21 Starting overwatch
	2023/10/06 01:41:21 Using namespace: kubernetes-dashboard
	2023/10/06 01:41:21 Using in-cluster config to connect to apiserver
	2023/10/06 01:41:21 Using secret token for csrf signing
	2023/10/06 01:41:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/10/06 01:41:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/10/06 01:41:21 Successful initial request to the apiserver, version: v1.16.0
	2023/10/06 01:41:21 Generating JWE encryption key
	2023/10/06 01:41:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/10/06 01:41:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/10/06 01:41:22 Initializing JWE encryption key from synchronized object
	2023/10/06 01:41:22 Creating in-cluster Sidecar client
	2023/10/06 01:41:22 Serving insecurely on HTTP port: 9090
	2023/10/06 01:41:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/06 01:41:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/06 01:42:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [c36592e1f934] <==
	* I1006 01:41:12.953920       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1006 01:41:12.991337       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1006 01:41:12.992781       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1006 01:41:13.012699       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1006 01:41:13.017572       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-456697_d8dbf35f-35d4-4c51-a8c7-cdad3eaeecbf!
	I1006 01:41:13.017514       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"11b24e48-1bb5-486b-b007-007510c87a03", APIVersion:"v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-456697_d8dbf35f-35d4-4c51-a8c7-cdad3eaeecbf became leader
	I1006 01:41:13.123390       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-456697_d8dbf35f-35d4-4c51-a8c7-cdad3eaeecbf!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-456697 -n old-k8s-version-456697
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-456697 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-72rfp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-456697 describe pod metrics-server-74d5856cc6-72rfp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-456697 describe pod metrics-server-74d5856cc6-72rfp: exit status 1 (63.639116ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-72rfp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-456697 describe pod metrics-server-74d5856cc6-72rfp: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.90s)

                                                
                                    

Test pass (287/320)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 8.26
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.2/json-events 4.28
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.15
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
19 TestBinaryMirror 0.58
20 TestOffline 98.88
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
25 TestAddons/Setup 152.66
27 TestAddons/parallel/Registry 16.08
28 TestAddons/parallel/Ingress 23.82
29 TestAddons/parallel/InspektorGadget 10.74
30 TestAddons/parallel/MetricsServer 5.99
31 TestAddons/parallel/HelmTiller 12.47
33 TestAddons/parallel/CSI 61.88
34 TestAddons/parallel/Headlamp 16.52
35 TestAddons/parallel/CloudSpanner 5.73
36 TestAddons/parallel/LocalPath 57.13
37 TestAddons/parallel/NvidiaDevicePlugin 5.48
40 TestAddons/serial/GCPAuth/Namespaces 0.15
41 TestAddons/StoppedEnableDisable 13.43
42 TestCertOptions 62.22
43 TestCertExpiration 288.03
44 TestDockerFlags 61.73
45 TestForceSystemdFlag 57.24
46 TestForceSystemdEnv 69.39
48 TestKVMDriverInstallOrUpdate 4.53
52 TestErrorSpam/setup 48.2
53 TestErrorSpam/start 0.4
54 TestErrorSpam/status 0.79
55 TestErrorSpam/pause 1.2
56 TestErrorSpam/unpause 1.39
57 TestErrorSpam/stop 3.6
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 66.67
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 39.1
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.08
68 TestFunctional/serial/CacheCmd/cache/add_remote 2.31
69 TestFunctional/serial/CacheCmd/cache/add_local 1.37
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.28
74 TestFunctional/serial/CacheCmd/cache/delete 0.13
75 TestFunctional/serial/MinikubeKubectlCmd 0.13
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
77 TestFunctional/serial/ExtraConfig 40.09
78 TestFunctional/serial/ComponentHealth 0.07
79 TestFunctional/serial/LogsCmd 1.08
80 TestFunctional/serial/LogsFileCmd 1.11
81 TestFunctional/serial/InvalidService 4.7
83 TestFunctional/parallel/ConfigCmd 0.46
84 TestFunctional/parallel/DashboardCmd 43.87
85 TestFunctional/parallel/DryRun 0.32
86 TestFunctional/parallel/InternationalLanguage 0.17
87 TestFunctional/parallel/StatusCmd 1.21
91 TestFunctional/parallel/ServiceCmdConnect 13.65
92 TestFunctional/parallel/AddonsCmd 0.17
93 TestFunctional/parallel/PersistentVolumeClaim 56.21
95 TestFunctional/parallel/SSHCmd 0.54
96 TestFunctional/parallel/CpCmd 1.05
97 TestFunctional/parallel/MySQL 40.21
98 TestFunctional/parallel/FileSync 0.24
99 TestFunctional/parallel/CertSync 1.63
103 TestFunctional/parallel/NodeLabels 0.08
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.27
107 TestFunctional/parallel/License 0.19
108 TestFunctional/parallel/Version/short 0.06
109 TestFunctional/parallel/Version/components 0.54
110 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
111 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
112 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
113 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
114 TestFunctional/parallel/ImageCommands/ImageBuild 2.85
115 TestFunctional/parallel/ImageCommands/Setup 1.38
116 TestFunctional/parallel/DockerEnv/bash 1.03
117 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
118 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
119 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
120 TestFunctional/parallel/ServiceCmd/DeployApp 13.32
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.25
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.54
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.1
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.23
134 TestFunctional/parallel/ServiceCmd/List 0.42
135 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
136 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
137 TestFunctional/parallel/ServiceCmd/Format 0.44
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.82
139 TestFunctional/parallel/ServiceCmd/URL 0.45
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.65
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.04
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
143 TestFunctional/parallel/ProfileCmd/profile_list 0.33
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
145 TestFunctional/parallel/MountCmd/any-port 33.66
146 TestFunctional/parallel/MountCmd/specific-port 1.99
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.59
148 TestFunctional/delete_addon-resizer_images 0.07
149 TestFunctional/delete_my-image_image 0.01
150 TestFunctional/delete_minikube_cached_images 0.01
151 TestGvisorAddon 308.41
154 TestImageBuild/serial/Setup 48.49
155 TestImageBuild/serial/NormalBuild 1.5
156 TestImageBuild/serial/BuildWithBuildArg 1.25
157 TestImageBuild/serial/BuildWithDockerIgnore 0.39
158 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.3
161 TestIngressAddonLegacy/StartLegacyK8sCluster 77.77
163 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.46
164 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.61
165 TestIngressAddonLegacy/serial/ValidateIngressAddons 47.34
168 TestJSONOutput/start/Command 67.03
169 TestJSONOutput/start/Audit 0
171 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/pause/Command 0.57
175 TestJSONOutput/pause/Audit 0
177 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/unpause/Command 0.56
181 TestJSONOutput/unpause/Audit 0
183 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/stop/Command 13.12
187 TestJSONOutput/stop/Audit 0
189 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
191 TestErrorJSONOutput 0.23
196 TestMainNoArgs 0.06
197 TestMinikubeProfile 109.53
200 TestMountStart/serial/StartWithMountFirst 28.7
201 TestMountStart/serial/VerifyMountFirst 0.41
202 TestMountStart/serial/StartWithMountSecond 29.19
203 TestMountStart/serial/VerifyMountSecond 0.43
204 TestMountStart/serial/DeleteFirst 0.71
205 TestMountStart/serial/VerifyMountPostDelete 0.42
206 TestMountStart/serial/Stop 2.1
207 TestMountStart/serial/RestartStopped 23.08
208 TestMountStart/serial/VerifyMountPostStop 0.42
211 TestMultiNode/serial/FreshStart2Nodes 126.43
212 TestMultiNode/serial/DeployApp2Nodes 5.86
213 TestMultiNode/serial/PingHostFrom2Pods 0.98
214 TestMultiNode/serial/AddNode 47.99
215 TestMultiNode/serial/ProfileList 0.22
216 TestMultiNode/serial/CopyFile 7.7
217 TestMultiNode/serial/StopNode 3.99
218 TestMultiNode/serial/StartAfterStop 31.18
219 TestMultiNode/serial/RestartKeepsNodes 179.57
220 TestMultiNode/serial/DeleteNode 1.75
221 TestMultiNode/serial/StopMultiNode 25.57
222 TestMultiNode/serial/RestartMultiNode 132.62
223 TestMultiNode/serial/ValidateNameConflict 52.85
228 TestPreload 200.83
230 TestScheduledStopUnix 120.73
231 TestSkaffold 139.67
234 TestRunningBinaryUpgrade 154.76
236 TestKubernetesUpgrade 264.04
249 TestStoppedBinaryUpgrade/Setup 0.5
250 TestStoppedBinaryUpgrade/Upgrade 202.12
251 TestStoppedBinaryUpgrade/MinikubeLogs 1.3
253 TestPause/serial/Start 93.62
261 TestPause/serial/SecondStartNoReconfiguration 56
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
264 TestNoKubernetes/serial/StartWithK8s 62.5
265 TestPause/serial/Pause 0.89
266 TestPause/serial/VerifyStatus 0.34
267 TestPause/serial/Unpause 0.68
268 TestPause/serial/PauseAgain 0.84
269 TestPause/serial/DeletePaused 1.26
270 TestPause/serial/VerifyDeletedResources 16.52
271 TestNetworkPlugins/group/auto/Start 77.57
272 TestNetworkPlugins/group/kindnet/Start 109.37
273 TestNoKubernetes/serial/StartWithStopK8s 61.51
274 TestNetworkPlugins/group/calico/Start 115.71
275 TestNoKubernetes/serial/Start 39.15
276 TestNetworkPlugins/group/auto/KubeletFlags 0.22
277 TestNetworkPlugins/group/auto/NetCatPod 11.32
278 TestNetworkPlugins/group/auto/DNS 0.22
279 TestNetworkPlugins/group/auto/Localhost 0.18
280 TestNetworkPlugins/group/auto/HairPin 0.22
281 TestNetworkPlugins/group/custom-flannel/Start 78.51
282 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
283 TestNoKubernetes/serial/ProfileList 1.34
284 TestNoKubernetes/serial/Stop 115.58
285 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
286 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
287 TestNetworkPlugins/group/kindnet/NetCatPod 12.43
288 TestNetworkPlugins/group/kindnet/DNS 0.18
289 TestNetworkPlugins/group/kindnet/Localhost 0.17
290 TestNetworkPlugins/group/kindnet/HairPin 0.17
291 TestNetworkPlugins/group/false/Start 75.99
292 TestNetworkPlugins/group/calico/ControllerPod 5.03
293 TestNetworkPlugins/group/calico/KubeletFlags 0.25
294 TestNetworkPlugins/group/calico/NetCatPod 12.42
295 TestNetworkPlugins/group/calico/DNS 0.19
296 TestNetworkPlugins/group/calico/Localhost 0.2
297 TestNetworkPlugins/group/calico/HairPin 0.18
298 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
299 TestNetworkPlugins/group/custom-flannel/NetCatPod 15.47
300 TestNetworkPlugins/group/enable-default-cni/Start 79.74
301 TestNetworkPlugins/group/custom-flannel/DNS 0.17
302 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
303 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
304 TestNetworkPlugins/group/flannel/Start 91.41
305 TestNetworkPlugins/group/false/KubeletFlags 0.27
306 TestNetworkPlugins/group/false/NetCatPod 14.49
308 TestNetworkPlugins/group/false/DNS 20.95
309 TestNetworkPlugins/group/bridge/Start 85.58
310 TestNetworkPlugins/group/false/Localhost 0.88
311 TestNetworkPlugins/group/false/HairPin 0.24
312 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
313 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.46
314 TestNetworkPlugins/group/kubenet/Start 80.57
315 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
316 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
317 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
319 TestStartStop/group/old-k8s-version/serial/FirstStart 144.7
320 TestNetworkPlugins/group/flannel/ControllerPod 5.03
321 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
322 TestNetworkPlugins/group/flannel/NetCatPod 14.54
323 TestNetworkPlugins/group/flannel/DNS 0.21
324 TestNetworkPlugins/group/flannel/Localhost 0.15
325 TestNetworkPlugins/group/flannel/HairPin 0.16
326 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
327 TestNetworkPlugins/group/bridge/NetCatPod 12.34
328 TestNetworkPlugins/group/bridge/DNS 0.19
329 TestNetworkPlugins/group/bridge/Localhost 0.17
330 TestNetworkPlugins/group/bridge/HairPin 0.17
332 TestStartStop/group/no-preload/serial/FirstStart 91.7
333 TestNetworkPlugins/group/kubenet/KubeletFlags 0.27
334 TestNetworkPlugins/group/kubenet/NetCatPod 13.47
336 TestStartStop/group/embed-certs/serial/FirstStart 120.36
337 TestNetworkPlugins/group/kubenet/DNS 0.18
338 TestNetworkPlugins/group/kubenet/Localhost 0.15
339 TestNetworkPlugins/group/kubenet/HairPin 0.15
341 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 116.59
342 TestStartStop/group/no-preload/serial/DeployApp 9.57
343 TestStartStop/group/old-k8s-version/serial/DeployApp 10.52
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.46
345 TestStartStop/group/no-preload/serial/Stop 13.19
346 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.83
347 TestStartStop/group/old-k8s-version/serial/Stop 13.16
348 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
349 TestStartStop/group/no-preload/serial/SecondStart 333.43
350 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.27
351 TestStartStop/group/old-k8s-version/serial/SecondStart 478.13
352 TestStartStop/group/embed-certs/serial/DeployApp 10.51
353 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.33
354 TestStartStop/group/embed-certs/serial/Stop 13.16
355 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.28
356 TestStartStop/group/embed-certs/serial/SecondStart 335.22
357 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.68
358 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.45
359 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.17
360 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
361 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 314.44
362 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 15.04
363 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
364 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
365 TestStartStop/group/no-preload/serial/Pause 2.84
367 TestStartStop/group/newest-cni/serial/FirstStart 75.34
368 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 16.17
369 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
370 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.36
371 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.47
372 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
373 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.89
374 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.34
375 TestStartStop/group/embed-certs/serial/Pause 3.04
376 TestStartStop/group/newest-cni/serial/DeployApp 0
377 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.04
378 TestStartStop/group/newest-cni/serial/Stop 8.13
379 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
380 TestStartStop/group/newest-cni/serial/SecondStart 47.37
381 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
382 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
384 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
385 TestStartStop/group/newest-cni/serial/Pause 2.4
386 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
388 TestStartStop/group/old-k8s-version/serial/Pause 2.37
TestDownloadOnly/v1.16.0/json-events (8.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-884566 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-884566 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (8.260180158s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.26s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-884566
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-884566: exit status 85 (78.475065ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-884566 | jenkins | v1.31.2 | 06 Oct 23 00:44 UTC |          |
	|         | -p download-only-884566        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/06 00:44:22
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 00:44:22.259801   75608 out.go:296] Setting OutFile to fd 1 ...
	I1006 00:44:22.259967   75608 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 00:44:22.259979   75608 out.go:309] Setting ErrFile to fd 2...
	I1006 00:44:22.259984   75608 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 00:44:22.260196   75608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-68418/.minikube/bin
	W1006 00:44:22.260347   75608 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17314-68418/.minikube/config/config.json: open /home/jenkins/minikube-integration/17314-68418/.minikube/config/config.json: no such file or directory
	I1006 00:44:22.261114   75608 out.go:303] Setting JSON to true
	I1006 00:44:22.262066   75608 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5215,"bootTime":1696547847,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 00:44:22.262135   75608 start.go:138] virtualization: kvm guest
	I1006 00:44:22.264807   75608 out.go:97] [download-only-884566] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1006 00:44:22.266564   75608 out.go:169] MINIKUBE_LOCATION=17314
	W1006 00:44:22.264959   75608 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17314-68418/.minikube/cache/preloaded-tarball: no such file or directory
	I1006 00:44:22.265041   75608 notify.go:220] Checking for updates...
	I1006 00:44:22.269537   75608 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 00:44:22.270921   75608 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17314-68418/kubeconfig
	I1006 00:44:22.272325   75608 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-68418/.minikube
	I1006 00:44:22.273724   75608 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1006 00:44:22.276208   75608 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1006 00:44:22.276505   75608 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 00:44:22.312915   75608 out.go:97] Using the kvm2 driver based on user configuration
	I1006 00:44:22.312985   75608 start.go:298] selected driver: kvm2
	I1006 00:44:22.312996   75608 start.go:902] validating driver "kvm2" against <nil>
	I1006 00:44:22.313403   75608 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 00:44:22.313505   75608 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17314-68418/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 00:44:22.329963   75608 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1006 00:44:22.330043   75608 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1006 00:44:22.330730   75608 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1006 00:44:22.330906   75608 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1006 00:44:22.330990   75608 cni.go:84] Creating CNI manager for ""
	I1006 00:44:22.331043   75608 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1006 00:44:22.331064   75608 start_flags.go:323] config:
	{Name:download-only-884566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-884566 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 00:44:22.331349   75608 iso.go:125] acquiring lock: {Name:mk09b1b55bb2317f3231832cf8a32146ecf7bf7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 00:44:22.333620   75608 out.go:97] Downloading VM boot image ...
	I1006 00:44:22.333676   75608 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17314-68418/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1006 00:44:24.756860   75608 out.go:97] Starting control plane node download-only-884566 in cluster download-only-884566
	I1006 00:44:24.756924   75608 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1006 00:44:24.778201   75608 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1006 00:44:24.778252   75608 cache.go:57] Caching tarball of preloaded images
	I1006 00:44:24.778424   75608 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1006 00:44:24.780304   75608 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1006 00:44:24.780326   75608 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1006 00:44:24.809140   75608 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17314-68418/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-884566"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

TestDownloadOnly/v1.28.2/json-events (4.28s)

=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-884566 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-884566 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=kvm2 : (4.282197245s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (4.28s)

TestDownloadOnly/v1.28.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

TestDownloadOnly/v1.28.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-884566
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-884566: exit status 85 (74.704805ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-884566 | jenkins | v1.31.2 | 06 Oct 23 00:44 UTC |          |
	|         | -p download-only-884566        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-884566 | jenkins | v1.31.2 | 06 Oct 23 00:44 UTC |          |
	|         | -p download-only-884566        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/06 00:44:30
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 00:44:30.599205   75654 out.go:296] Setting OutFile to fd 1 ...
	I1006 00:44:30.599467   75654 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 00:44:30.599476   75654 out.go:309] Setting ErrFile to fd 2...
	I1006 00:44:30.599481   75654 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 00:44:30.599682   75654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-68418/.minikube/bin
	W1006 00:44:30.599798   75654 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17314-68418/.minikube/config/config.json: open /home/jenkins/minikube-integration/17314-68418/.minikube/config/config.json: no such file or directory
	I1006 00:44:30.600234   75654 out.go:303] Setting JSON to true
	I1006 00:44:30.601079   75654 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5224,"bootTime":1696547847,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 00:44:30.601143   75654 start.go:138] virtualization: kvm guest
	I1006 00:44:30.603687   75654 out.go:97] [download-only-884566] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1006 00:44:30.605311   75654 out.go:169] MINIKUBE_LOCATION=17314
	I1006 00:44:30.603921   75654 notify.go:220] Checking for updates...
	I1006 00:44:30.608155   75654 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 00:44:30.609658   75654 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17314-68418/kubeconfig
	I1006 00:44:30.611123   75654 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-68418/.minikube
	I1006 00:44:30.612648   75654 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-884566"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.15s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-884566
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-731875 --alsologtostderr --binary-mirror http://127.0.0.1:39547 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-731875" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-731875
--- PASS: TestBinaryMirror (0.58s)

TestOffline (98.88s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-529377 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-529377 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m37.708331092s)
helpers_test.go:175: Cleaning up "offline-docker-529377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-529377
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-529377: (1.168445091s)
--- PASS: TestOffline (98.88s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-672690
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-672690: exit status 85 (65.534775ms)

-- stdout --
	* Profile "addons-672690" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-672690"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-672690
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-672690: exit status 85 (64.906233ms)

-- stdout --
	* Profile "addons-672690" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-672690"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (152.66s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-672690 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-672690 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m32.656862909s)
--- PASS: TestAddons/Setup (152.66s)

TestAddons/parallel/Registry (16.08s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 16.885144ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-g8ffb" [9114c597-154e-43ac-8aae-86d679869163] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.018810473s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tcnln" [cbf884c1-df4b-441f-9901-c1524497c11c] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.018538205s
addons_test.go:339: (dbg) Run:  kubectl --context addons-672690 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-672690 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-672690 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.08958534s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-672690 ip
2023/10/06 00:47:23 [DEBUG] GET http://192.168.39.187:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-672690 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.08s)

TestAddons/parallel/Ingress (23.82s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-672690 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-672690 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-672690 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9d2c4ce2-daac-42b1-8701-4a1e9bba8b92] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9d2c4ce2-daac-42b1-8701-4a1e9bba8b92] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.015345716s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-672690 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-672690 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-672690 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.187
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-672690 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-672690 addons disable ingress-dns --alsologtostderr -v=1: (1.687800926s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-672690 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-672690 addons disable ingress --alsologtostderr -v=1: (7.731186545s)
--- PASS: TestAddons/parallel/Ingress (23.82s)

TestAddons/parallel/InspektorGadget (10.74s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2xvqg" [4495b3a4-7dbb-4149-910b-32c9f94ed9d9] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.019425851s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-672690
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-672690: (5.721675754s)
--- PASS: TestAddons/parallel/InspektorGadget (10.74s)

TestAddons/parallel/MetricsServer (5.99s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 11.97261ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-zfl9m" [8d174a1c-8fa1-46ef-95ec-a52149719bc4] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.022649524s
addons_test.go:414: (dbg) Run:  kubectl --context addons-672690 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-672690 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.99s)

TestAddons/parallel/HelmTiller (12.47s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 4.458092ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-bxmrk" [4b91228e-32ad-4255-b91f-b343dadbb550] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.015157285s
addons_test.go:472: (dbg) Run:  kubectl --context addons-672690 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-672690 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.918873053s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-672690 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.47s)

TestAddons/parallel/CSI (61.88s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 6.220922ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-672690 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-672690 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f92c3f54-f223-4fec-8a8b-385121381fa5] Pending
helpers_test.go:344: "task-pv-pod" [f92c3f54-f223-4fec-8a8b-385121381fa5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f92c3f54-f223-4fec-8a8b-385121381fa5] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.020417659s
addons_test.go:583: (dbg) Run:  kubectl --context addons-672690 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-672690 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-672690 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-672690 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-672690 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-672690 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-672690 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-672690 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d6c16203-7f25-4849-96cc-e62596225536] Pending
helpers_test.go:344: "task-pv-pod-restore" [d6c16203-7f25-4849-96cc-e62596225536] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d6c16203-7f25-4849-96cc-e62596225536] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.021715536s
addons_test.go:625: (dbg) Run:  kubectl --context addons-672690 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-672690 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-672690 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-672690 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-672690 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.704666914s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-672690 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (61.88s)

TestAddons/parallel/Headlamp (16.52s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-672690 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-672690 --alsologtostderr -v=1: (1.479697776s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-jjg8r" [683df03d-ab44-4166-b6b4-b453aa49af51] Pending
helpers_test.go:344: "headlamp-58b88cff49-jjg8r" [683df03d-ab44-4166-b6b4-b453aa49af51] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-jjg8r" [683df03d-ab44-4166-b6b4-b453aa49af51] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-jjg8r" [683df03d-ab44-4166-b6b4-b453aa49af51] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.040478679s
--- PASS: TestAddons/parallel/Headlamp (16.52s)

TestAddons/parallel/CloudSpanner (5.73s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-5pg6n" [fbed516f-a13f-4152-ab99-8d567c40cefa] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.013589905s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-672690
--- PASS: TestAddons/parallel/CloudSpanner (5.73s)

TestAddons/parallel/LocalPath (57.13s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-672690 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-672690 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-672690 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [2dfa1be1-6793-4a78-921e-06f327820f86] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [2dfa1be1-6793-4a78-921e-06f327820f86] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [2dfa1be1-6793-4a78-921e-06f327820f86] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.013213314s
addons_test.go:890: (dbg) Run:  kubectl --context addons-672690 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-672690 ssh "cat /opt/local-path-provisioner/pvc-4300d6df-012e-4fc1-bb6f-bf60f5b99cd1_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-672690 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-672690 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-672690 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-672690 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.347678412s)
--- PASS: TestAddons/parallel/LocalPath (57.13s)

TestAddons/parallel/NvidiaDevicePlugin (5.48s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rwh4m" [233f0fe7-71ab-42d8-962f-ce0ff1c573b4] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.015596761s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-672690
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.48s)

TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-672690 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-672690 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/StoppedEnableDisable (13.43s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-672690
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-672690: (13.114861055s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-672690
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-672690
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-672690
--- PASS: TestAddons/StoppedEnableDisable (13.43s)

TestCertOptions (62.22s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-503269 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-503269 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m0.639198964s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-503269 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-503269 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-503269 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-503269" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-503269
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-503269: (1.085886954s)
--- PASS: TestCertOptions (62.22s)

TestCertExpiration (288.03s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-845025 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-845025 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m16.046955819s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-845025 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-845025 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (30.816291852s)
helpers_test.go:175: Cleaning up "cert-expiration-845025" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-845025
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-845025: (1.166630453s)
--- PASS: TestCertExpiration (288.03s)

TestDockerFlags (61.73s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-038159 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-038159 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m0.315111402s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-038159 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-038159 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-038159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-038159
--- PASS: TestDockerFlags (61.73s)

TestForceSystemdFlag (57.24s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-566354 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-566354 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (55.741650531s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-566354 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-566354" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-566354
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-566354: (1.215439925s)
--- PASS: TestForceSystemdFlag (57.24s)

TestForceSystemdEnv (69.39s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-335417 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-335417 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m7.894328328s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-335417 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-335417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-335417
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-335417: (1.053072796s)
--- PASS: TestForceSystemdEnv (69.39s)

TestKVMDriverInstallOrUpdate (4.53s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.53s)

TestErrorSpam/setup (48.2s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-120340 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-120340 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-120340 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-120340 --driver=kvm2 : (48.199415127s)
--- PASS: TestErrorSpam/setup (48.20s)

TestErrorSpam/start (0.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-120340 --log_dir /tmp/nospam-120340 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-120340 --log_dir /tmp/nospam-120340 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-120340 --log_dir /tmp/nospam-120340 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

TestErrorSpam/status (0.79s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-120340 --log_dir /tmp/nospam-120340 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-120340 --log_dir /tmp/nospam-120340 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-120340 --log_dir /tmp/nospam-120340 status
--- PASS: TestErrorSpam/status (0.79s)

TestErrorSpam/pause (1.2s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-120340 --log_dir /tmp/nospam-120340 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-120340 --log_dir /tmp/nospam-120340 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-120340 --log_dir /tmp/nospam-120340 pause
--- PASS: TestErrorSpam/pause (1.20s)

TestErrorSpam/unpause (1.39s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-120340 --log_dir /tmp/nospam-120340 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-120340 --log_dir /tmp/nospam-120340 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-120340 --log_dir /tmp/nospam-120340 unpause
--- PASS: TestErrorSpam/unpause (1.39s)

TestErrorSpam/stop (3.6s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-120340 --log_dir /tmp/nospam-120340 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-120340 --log_dir /tmp/nospam-120340 stop: (3.43238497s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-120340 --log_dir /tmp/nospam-120340 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-120340 --log_dir /tmp/nospam-120340 stop
--- PASS: TestErrorSpam/stop (3.60s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17314-68418/.minikube/files/etc/test/nested/copy/75596/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (66.67s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-364725 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-364725 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m6.667616074s)
--- PASS: TestFunctional/serial/StartWithProxy (66.67s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.1s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-364725 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-364725 --alsologtostderr -v=8: (39.095805197s)
functional_test.go:659: soft start took 39.096437329s for "functional-364725" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.10s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-364725 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.31s)

TestFunctional/serial/CacheCmd/cache/add_local (1.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-364725 /tmp/TestFunctionalserialCacheCmdcacheadd_local1500082212/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 cache add minikube-local-cache-test:functional-364725
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-364725 cache add minikube-local-cache-test:functional-364725: (1.045522377s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 cache delete minikube-local-cache-test:functional-364725
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-364725
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.37s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-364725 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (256.456099ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.28s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 kubectl -- --context functional-364725 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-364725 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (40.09s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-364725 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1006 00:52:08.638770   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
E1006 00:52:08.644625   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
E1006 00:52:08.654976   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
E1006 00:52:08.675271   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
E1006 00:52:08.715561   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
E1006 00:52:08.795945   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
E1006 00:52:08.956419   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
E1006 00:52:09.277041   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
E1006 00:52:09.918013   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
E1006 00:52:11.198630   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
E1006 00:52:13.759171   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-364725 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.084907828s)
functional_test.go:757: restart took 40.085027731s for "functional-364725" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.09s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-364725 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-364725 logs: (1.083812519s)
--- PASS: TestFunctional/serial/LogsCmd (1.08s)

TestFunctional/serial/LogsFileCmd (1.11s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 logs --file /tmp/TestFunctionalserialLogsFileCmd531688333/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-364725 logs --file /tmp/TestFunctionalserialLogsFileCmd531688333/001/logs.txt: (1.112378932s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.11s)

TestFunctional/serial/InvalidService (4.7s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-364725 apply -f testdata/invalidsvc.yaml
E1006 00:52:18.879736   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-364725
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-364725: exit status 115 (302.402492ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.32:30495 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-364725 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-364725 delete -f testdata/invalidsvc.yaml: (1.065937006s)
--- PASS: TestFunctional/serial/InvalidService (4.70s)
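The SVC_UNREACHABLE exit above is the expected outcome: invalidsvc.yaml creates a Service whose selector matches no running pod, so `minikube service` finds nothing to route to and exits with status 115. A minimal sketch of that reachability condition, with our own names (this is not minikube's actual code):

```python
def service_reachable(pods, selector):
    """True if at least one Running pod carries every label in the
    Service's selector -- the condition the log's 'no running pod for
    service invalid-svc found' message reports as false."""
    return any(
        p["phase"] == "Running" and selector.items() <= p["labels"].items()
        for p in pods
    )
```

A pod that is merely Pending (as during image pull) does not make the service reachable, which is exactly the state the test provokes.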

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-364725 config get cpus: exit status 14 (80.906122ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-364725 config get cpus: exit status 14 (65.997193ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
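The sequence above pins down the semantics of `minikube config`: `get` on an absent key fails (exit status 14 in this run, with "specified key could not be found in config"), `unset` succeeds even when the key is already gone, and `set` makes a subsequent `get` succeed. An in-memory sketch of that contract (illustrative only; the class and method names are ours, not minikube's):

```python
class MiniConfig:
    """Tiny stand-in for minikube's per-profile config store."""

    EX_CONFIG_ERROR = 14  # exit code observed in the log for a missing key

    def __init__(self):
        self._values = {}

    def set(self, key, value):
        self._values[key] = value
        return 0

    def unset(self, key):
        # Idempotent: unsetting an absent key is not an error.
        self._values.pop(key, None)
        return 0

    def get(self, key):
        if key not in self._values:
            # mirrors "Error: specified key could not be found in config"
            return None, self.EX_CONFIG_ERROR
        return self._values[key], 0
```

Replaying the test's command sequence (unset, get, set 2, get, unset, get) against this sketch reproduces the two non-zero exits the log records.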

TestFunctional/parallel/DashboardCmd (43.87s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-364725 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-364725 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 82039: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (43.87s)

TestFunctional/parallel/DryRun (0.32s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-364725 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-364725 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (165.803472ms)

-- stdout --
	* [functional-364725] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-68418/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-68418/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1006 00:52:39.282346   81743 out.go:296] Setting OutFile to fd 1 ...
	I1006 00:52:39.282657   81743 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 00:52:39.282669   81743 out.go:309] Setting ErrFile to fd 2...
	I1006 00:52:39.282677   81743 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 00:52:39.283018   81743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-68418/.minikube/bin
	I1006 00:52:39.283753   81743 out.go:303] Setting JSON to false
	I1006 00:52:39.285036   81743 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5712,"bootTime":1696547847,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 00:52:39.285138   81743 start.go:138] virtualization: kvm guest
	I1006 00:52:39.287607   81743 out.go:177] * [functional-364725] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1006 00:52:39.289144   81743 out.go:177]   - MINIKUBE_LOCATION=17314
	I1006 00:52:39.289172   81743 notify.go:220] Checking for updates...
	I1006 00:52:39.290835   81743 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 00:52:39.292539   81743 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17314-68418/kubeconfig
	I1006 00:52:39.294114   81743 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-68418/.minikube
	I1006 00:52:39.295483   81743 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 00:52:39.297010   81743 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 00:52:39.299009   81743 config.go:182] Loaded profile config "functional-364725": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1006 00:52:39.299596   81743 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 00:52:39.299687   81743 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 00:52:39.321896   81743 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46721
	I1006 00:52:39.322318   81743 main.go:141] libmachine: () Calling .GetVersion
	I1006 00:52:39.322961   81743 main.go:141] libmachine: Using API Version  1
	I1006 00:52:39.322997   81743 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 00:52:39.323347   81743 main.go:141] libmachine: () Calling .GetMachineName
	I1006 00:52:39.323555   81743 main.go:141] libmachine: (functional-364725) Calling .DriverName
	I1006 00:52:39.323814   81743 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 00:52:39.324122   81743 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 00:52:39.324172   81743 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 00:52:39.338857   81743 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32911
	I1006 00:52:39.339338   81743 main.go:141] libmachine: () Calling .GetVersion
	I1006 00:52:39.339816   81743 main.go:141] libmachine: Using API Version  1
	I1006 00:52:39.339841   81743 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 00:52:39.340194   81743 main.go:141] libmachine: () Calling .GetMachineName
	I1006 00:52:39.340408   81743 main.go:141] libmachine: (functional-364725) Calling .DriverName
	I1006 00:52:39.374122   81743 out.go:177] * Using the kvm2 driver based on existing profile
	I1006 00:52:39.375761   81743 start.go:298] selected driver: kvm2
	I1006 00:52:39.375783   81743 start.go:902] validating driver "kvm2" against &{Name:functional-364725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.2 ClusterName:functional-364725 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.32 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 00:52:39.375932   81743 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 00:52:39.378214   81743 out.go:177] 
	W1006 00:52:39.379670   81743 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1006 00:52:39.381280   81743 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-364725 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.32s)
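The exit status 23 run is the point of this test: with `--dry-run`, minikube validates the requested 250MB against an 1800MB usable minimum and refuses before touching the VM. A sketch of that check (the function name is ours; the threshold and message text are taken from the log, not from minikube's source):

```python
MINIMUM_USABLE_MB = 1800  # minimum quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message

def validate_requested_memory(requested_mb):
    """Return (ok, message), mirroring the dry-run rejection above."""
    if requested_mb < MINIMUM_USABLE_MB:
        return False, (
            f"RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation "
            f"{requested_mb}MiB is less than the usable minimum of "
            f"{MINIMUM_USABLE_MB}MB"
        )
    return True, ""
```

The second `--dry-run` invocation in the log omits `--memory`, so the profile's existing 4000MB allocation passes this check and the command exits cleanly.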

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-364725 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-364725 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (170.44179ms)

-- stdout --
	* [functional-364725] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-68418/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-68418/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1006 00:52:40.824362   81909 out.go:296] Setting OutFile to fd 1 ...
	I1006 00:52:40.824500   81909 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 00:52:40.824508   81909 out.go:309] Setting ErrFile to fd 2...
	I1006 00:52:40.824513   81909 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 00:52:40.824793   81909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-68418/.minikube/bin
	I1006 00:52:40.825352   81909 out.go:303] Setting JSON to false
	I1006 00:52:40.826283   81909 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5714,"bootTime":1696547847,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 00:52:40.826351   81909 start.go:138] virtualization: kvm guest
	I1006 00:52:40.828748   81909 out.go:177] * [functional-364725] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I1006 00:52:40.830725   81909 out.go:177]   - MINIKUBE_LOCATION=17314
	I1006 00:52:40.830754   81909 notify.go:220] Checking for updates...
	I1006 00:52:40.832171   81909 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 00:52:40.833712   81909 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17314-68418/kubeconfig
	I1006 00:52:40.835191   81909 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-68418/.minikube
	I1006 00:52:40.836637   81909 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 00:52:40.838167   81909 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 00:52:40.840030   81909 config.go:182] Loaded profile config "functional-364725": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1006 00:52:40.840436   81909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 00:52:40.840525   81909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 00:52:40.855279   81909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38007
	I1006 00:52:40.855725   81909 main.go:141] libmachine: () Calling .GetVersion
	I1006 00:52:40.856314   81909 main.go:141] libmachine: Using API Version  1
	I1006 00:52:40.856348   81909 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 00:52:40.856825   81909 main.go:141] libmachine: () Calling .GetMachineName
	I1006 00:52:40.857036   81909 main.go:141] libmachine: (functional-364725) Calling .DriverName
	I1006 00:52:40.857330   81909 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 00:52:40.857786   81909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 00:52:40.857850   81909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 00:52:40.873198   81909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43423
	I1006 00:52:40.873623   81909 main.go:141] libmachine: () Calling .GetVersion
	I1006 00:52:40.874068   81909 main.go:141] libmachine: Using API Version  1
	I1006 00:52:40.874095   81909 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 00:52:40.874460   81909 main.go:141] libmachine: () Calling .GetMachineName
	I1006 00:52:40.874692   81909 main.go:141] libmachine: (functional-364725) Calling .DriverName
	I1006 00:52:40.908218   81909 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1006 00:52:40.909789   81909 start.go:298] selected driver: kvm2
	I1006 00:52:40.909806   81909 start.go:902] validating driver "kvm2" against &{Name:functional-364725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.2 ClusterName:functional-364725 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.32 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 00:52:40.909927   81909 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 00:52:40.912107   81909 out.go:177] 
	W1006 00:52:40.913433   81909 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1006 00:52:40.914822   81909 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1.21s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.21s)

TestFunctional/parallel/ServiceCmdConnect (13.65s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-364725 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-364725 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-5p7nc" [c4caad8e-c3cf-4958-ae0c-9a7e5c13e697] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-5p7nc" [c4caad8e-c3cf-4958-ae0c-9a7e5c13e697] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.020847881s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.32:32715
functional_test.go:1674: http://192.168.39.32:32715: success! body:

Hostname: hello-node-connect-55497b8b78-5p7nc

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.32:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.32:32715
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.65s)
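The response body printed above is standard echoserver output: section headings at column 0 followed by tab-indented `key=value` lines. A small parser for that shape (a hypothetical helper, shown only to make the log format explicit):

```python
def parse_echoserver_section(body, section):
    """Extract key=value pairs from one section of an echoserver reply,
    e.g. parse_echoserver_section(body, "Request Headers")."""
    out = {}
    in_section = False
    for line in body.splitlines():
        if line.strip() == section + ":":
            in_section = True
            continue
        if in_section:
            # A non-indented, non-blank line starts the next section.
            if not line.startswith("\t") and line.strip():
                break
            if "=" in line:
                key, _, value = line.strip().partition("=")
                out[key] = value
    return out
```

Run against the body above, `parse_echoserver_section(body, "Request Headers")` would recover the `host` and `user-agent` entries the server echoed back.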

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (56.21s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f16cc58d-7db5-4729-9bff-dba3966119bd] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.018286978s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-364725 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-364725 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-364725 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-364725 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-364725 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [34745cb0-08c2-4dad-8737-eb64ab0467e4] Pending
helpers_test.go:344: "sp-pod" [34745cb0-08c2-4dad-8737-eb64ab0467e4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [34745cb0-08c2-4dad-8737-eb64ab0467e4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.029127615s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-364725 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-364725 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-364725 delete -f testdata/storage-provisioner/pod.yaml: (1.158621989s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-364725 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c728e6e2-b4e9-431a-8f75-7e87e1d5384f] Pending
helpers_test.go:344: "sp-pod" [c728e6e2-b4e9-431a-8f75-7e87e1d5384f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c728e6e2-b4e9-431a-8f75-7e87e1d5384f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.018475639s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-364725 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (56.21s)
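The sequence above verifies one property: `/tmp/mount/foo`, written by the first sp-pod, is still present after that pod is deleted and a replacement is created, because the file lives on the claim rather than in the container filesystem. The same property simulated locally, with a plain directory standing in for the PersistentVolume (pure illustration, no Kubernetes involved):

```python
import pathlib
import tempfile

# A directory stands in for the PersistentVolume: it outlives "pods".
pv = pathlib.Path(tempfile.mkdtemp())

def run_pod(volume, action):
    """Each call models a fresh pod mounting the same volume."""
    return action(volume)

# First pod: touch /tmp/mount/foo, then the pod is deleted.
run_pod(pv, lambda v: (v / "foo").touch())

# The replacement pod mounts the same claim and still sees the file,
# which is what the final `exec sp-pod -- ls /tmp/mount` checks.
survivors = run_pod(pv, lambda v: sorted(p.name for p in v.iterdir()))
```

The test's two Pending-to-Running waits are just the pods scheduling; the persistence itself is entirely a property of the volume.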

TestFunctional/parallel/SSHCmd (0.54s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)
TestFunctional/parallel/CpCmd (1.05s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh -n functional-364725 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 cp functional-364725:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3847822652/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh -n functional-364725 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.05s)
TestFunctional/parallel/MySQL (40.21s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-364725 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-mp2kc" [fbd4b85c-6dc9-48ad-873d-36237c04d54e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-mp2kc" [fbd4b85c-6dc9-48ad-873d-36237c04d54e] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 31.029685009s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-364725 exec mysql-859648c796-mp2kc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-364725 exec mysql-859648c796-mp2kc -- mysql -ppassword -e "show databases;": exit status 1 (278.365604ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-364725 exec mysql-859648c796-mp2kc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-364725 exec mysql-859648c796-mp2kc -- mysql -ppassword -e "show databases;": exit status 1 (343.337179ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-364725 exec mysql-859648c796-mp2kc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-364725 exec mysql-859648c796-mp2kc -- mysql -ppassword -e "show databases;": exit status 1 (299.230958ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-364725 exec mysql-859648c796-mp2kc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-364725 exec mysql-859648c796-mp2kc -- mysql -ppassword -e "show databases;": exit status 1 (220.700933ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-364725 exec mysql-859648c796-mp2kc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (40.21s)
TestFunctional/parallel/FileSync (0.24s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/75596/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh "sudo cat /etc/test/nested/copy/75596/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)
TestFunctional/parallel/CertSync (1.63s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/75596.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh "sudo cat /etc/ssl/certs/75596.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/75596.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh "sudo cat /usr/share/ca-certificates/75596.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/755962.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh "sudo cat /etc/ssl/certs/755962.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/755962.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh "sudo cat /usr/share/ca-certificates/755962.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.63s)
TestFunctional/parallel/NodeLabels (0.08s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-364725 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-364725 ssh "sudo systemctl is-active crio": exit status 1 (271.109476ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)
TestFunctional/parallel/License (0.19s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)
TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)
TestFunctional/parallel/Version/components (0.54s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.54s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-364725 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-364725
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-364725
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-364725 image ls --format short --alsologtostderr:
I1006 00:53:20.068592   82804 out.go:296] Setting OutFile to fd 1 ...
I1006 00:53:20.068709   82804 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 00:53:20.068720   82804 out.go:309] Setting ErrFile to fd 2...
I1006 00:53:20.068727   82804 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 00:53:20.068927   82804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-68418/.minikube/bin
I1006 00:53:20.069566   82804 config.go:182] Loaded profile config "functional-364725": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1006 00:53:20.069661   82804 config.go:182] Loaded profile config "functional-364725": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1006 00:53:20.070036   82804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1006 00:53:20.070083   82804 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 00:53:20.087793   82804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
I1006 00:53:20.088297   82804 main.go:141] libmachine: () Calling .GetVersion
I1006 00:53:20.088883   82804 main.go:141] libmachine: Using API Version  1
I1006 00:53:20.088915   82804 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 00:53:20.089312   82804 main.go:141] libmachine: () Calling .GetMachineName
I1006 00:53:20.089504   82804 main.go:141] libmachine: (functional-364725) Calling .GetState
I1006 00:53:20.091449   82804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1006 00:53:20.091494   82804 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 00:53:20.106183   82804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34793
I1006 00:53:20.106634   82804 main.go:141] libmachine: () Calling .GetVersion
I1006 00:53:20.107174   82804 main.go:141] libmachine: Using API Version  1
I1006 00:53:20.107218   82804 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 00:53:20.107574   82804 main.go:141] libmachine: () Calling .GetMachineName
I1006 00:53:20.107820   82804 main.go:141] libmachine: (functional-364725) Calling .DriverName
I1006 00:53:20.108059   82804 ssh_runner.go:195] Run: systemctl --version
I1006 00:53:20.108089   82804 main.go:141] libmachine: (functional-364725) Calling .GetSSHHostname
I1006 00:53:20.110917   82804 main.go:141] libmachine: (functional-364725) DBG | domain functional-364725 has defined MAC address 52:54:00:4b:e9:f5 in network mk-functional-364725
I1006 00:53:20.111353   82804 main.go:141] libmachine: (functional-364725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:e9:f5", ip: ""} in network mk-functional-364725: {Iface:virbr1 ExpiryTime:2023-10-06 01:49:59 +0000 UTC Type:0 Mac:52:54:00:4b:e9:f5 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:functional-364725 Clientid:01:52:54:00:4b:e9:f5}
I1006 00:53:20.111384   82804 main.go:141] libmachine: (functional-364725) DBG | domain functional-364725 has defined IP address 192.168.39.32 and MAC address 52:54:00:4b:e9:f5 in network mk-functional-364725
I1006 00:53:20.111551   82804 main.go:141] libmachine: (functional-364725) Calling .GetSSHPort
I1006 00:53:20.111779   82804 main.go:141] libmachine: (functional-364725) Calling .GetSSHKeyPath
I1006 00:53:20.111982   82804 main.go:141] libmachine: (functional-364725) Calling .GetSSHUsername
I1006 00:53:20.112142   82804 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-68418/.minikube/machines/functional-364725/id_rsa Username:docker}
I1006 00:53:20.208749   82804 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1006 00:53:20.240735   82804 main.go:141] libmachine: Making call to close driver server
I1006 00:53:20.240747   82804 main.go:141] libmachine: (functional-364725) Calling .Close
I1006 00:53:20.241066   82804 main.go:141] libmachine: (functional-364725) DBG | Closing plugin on server side
I1006 00:53:20.241140   82804 main.go:141] libmachine: Successfully made call to close driver server
I1006 00:53:20.241164   82804 main.go:141] libmachine: Making call to close connection to plugin binary
I1006 00:53:20.241175   82804 main.go:141] libmachine: Making call to close driver server
I1006 00:53:20.241186   82804 main.go:141] libmachine: (functional-364725) Calling .Close
I1006 00:53:20.241419   82804 main.go:141] libmachine: (functional-364725) DBG | Closing plugin on server side
I1006 00:53:20.241443   82804 main.go:141] libmachine: Successfully made call to close driver server
I1006 00:53:20.241463   82804 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-364725 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-364725 | e38208064cb66 | 30B    |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.28.2           | cdcab12b2dd16 | 126MB  |
| registry.k8s.io/kube-proxy                  | v1.28.2           | c120fed2beb84 | 73.1MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/google-containers/addon-resizer      | functional-364725 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/nginx                     | latest            | 61395b4c586da | 187MB  |
| registry.k8s.io/kube-scheduler              | v1.28.2           | 7a5d9d67a13f6 | 60.1MB |
| registry.k8s.io/kube-controller-manager     | v1.28.2           | 55f13c92defb1 | 122MB  |
| docker.io/library/mysql                     | 5.7               | a5b7ceed40749 | 581MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-364725 image ls --format table --alsologtostderr:
I1006 00:53:21.312158   83059 out.go:296] Setting OutFile to fd 1 ...
I1006 00:53:21.312283   83059 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 00:53:21.312292   83059 out.go:309] Setting ErrFile to fd 2...
I1006 00:53:21.312297   83059 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 00:53:21.312500   83059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-68418/.minikube/bin
I1006 00:53:21.313074   83059 config.go:182] Loaded profile config "functional-364725": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1006 00:53:21.313176   83059 config.go:182] Loaded profile config "functional-364725": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1006 00:53:21.313566   83059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1006 00:53:21.313625   83059 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 00:53:21.328625   83059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45149
I1006 00:53:21.329111   83059 main.go:141] libmachine: () Calling .GetVersion
I1006 00:53:21.329725   83059 main.go:141] libmachine: Using API Version  1
I1006 00:53:21.329756   83059 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 00:53:21.330084   83059 main.go:141] libmachine: () Calling .GetMachineName
I1006 00:53:21.330297   83059 main.go:141] libmachine: (functional-364725) Calling .GetState
I1006 00:53:21.332226   83059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1006 00:53:21.332277   83059 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 00:53:21.346635   83059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35093
I1006 00:53:21.347031   83059 main.go:141] libmachine: () Calling .GetVersion
I1006 00:53:21.347473   83059 main.go:141] libmachine: Using API Version  1
I1006 00:53:21.347497   83059 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 00:53:21.347807   83059 main.go:141] libmachine: () Calling .GetMachineName
I1006 00:53:21.347988   83059 main.go:141] libmachine: (functional-364725) Calling .DriverName
I1006 00:53:21.348181   83059 ssh_runner.go:195] Run: systemctl --version
I1006 00:53:21.348214   83059 main.go:141] libmachine: (functional-364725) Calling .GetSSHHostname
I1006 00:53:21.350735   83059 main.go:141] libmachine: (functional-364725) DBG | domain functional-364725 has defined MAC address 52:54:00:4b:e9:f5 in network mk-functional-364725
I1006 00:53:21.351187   83059 main.go:141] libmachine: (functional-364725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:e9:f5", ip: ""} in network mk-functional-364725: {Iface:virbr1 ExpiryTime:2023-10-06 01:49:59 +0000 UTC Type:0 Mac:52:54:00:4b:e9:f5 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:functional-364725 Clientid:01:52:54:00:4b:e9:f5}
I1006 00:53:21.351230   83059 main.go:141] libmachine: (functional-364725) DBG | domain functional-364725 has defined IP address 192.168.39.32 and MAC address 52:54:00:4b:e9:f5 in network mk-functional-364725
I1006 00:53:21.351322   83059 main.go:141] libmachine: (functional-364725) Calling .GetSSHPort
I1006 00:53:21.351468   83059 main.go:141] libmachine: (functional-364725) Calling .GetSSHKeyPath
I1006 00:53:21.351586   83059 main.go:141] libmachine: (functional-364725) Calling .GetSSHUsername
I1006 00:53:21.351753   83059 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-68418/.minikube/machines/functional-364725/id_rsa Username:docker}
I1006 00:53:21.444789   83059 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1006 00:53:21.468441   83059 main.go:141] libmachine: Making call to close driver server
I1006 00:53:21.468462   83059 main.go:141] libmachine: (functional-364725) Calling .Close
I1006 00:53:21.468771   83059 main.go:141] libmachine: Successfully made call to close driver server
I1006 00:53:21.468827   83059 main.go:141] libmachine: Making call to close connection to plugin binary
I1006 00:53:21.468834   83059 main.go:141] libmachine: (functional-364725) DBG | Closing plugin on server side
I1006 00:53:21.468847   83059 main.go:141] libmachine: Making call to close driver server
I1006 00:53:21.468860   83059 main.go:141] libmachine: (functional-364725) Calling .Close
I1006 00:53:21.469120   83059 main.go:141] libmachine: (functional-364725) DBG | Closing plugin on server side
I1006 00:53:21.469132   83059 main.go:141] libmachine: Successfully made call to close driver server
I1006 00:53:21.469152   83059 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-364725 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-364725"],"size":"32900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"126000000"},{"id":"55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"122000000"},{"id":"7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"60100000"},{"id":"a5b7ceed4074932a04ea553af3124bb03b249affe14899e2cd746d1a63e12ecc","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"581000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"e38208064cb669861a691a2ceadb29d72ccf11172feda0e26cc438cd9e9d794b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-364725"],"size":"30"},{"id":"c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"73100000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-364725 image ls --format json --alsologtostderr:
I1006 00:53:21.060393   83036 out.go:296] Setting OutFile to fd 1 ...
I1006 00:53:21.060531   83036 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 00:53:21.060537   83036 out.go:309] Setting ErrFile to fd 2...
I1006 00:53:21.060544   83036 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 00:53:21.060822   83036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-68418/.minikube/bin
I1006 00:53:21.061625   83036 config.go:182] Loaded profile config "functional-364725": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1006 00:53:21.061776   83036 config.go:182] Loaded profile config "functional-364725": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1006 00:53:21.062324   83036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1006 00:53:21.062381   83036 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 00:53:21.077794   83036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39749
I1006 00:53:21.078354   83036 main.go:141] libmachine: () Calling .GetVersion
I1006 00:53:21.079120   83036 main.go:141] libmachine: Using API Version  1
I1006 00:53:21.079148   83036 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 00:53:21.079592   83036 main.go:141] libmachine: () Calling .GetMachineName
I1006 00:53:21.079810   83036 main.go:141] libmachine: (functional-364725) Calling .GetState
I1006 00:53:21.081956   83036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1006 00:53:21.082003   83036 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 00:53:21.098760   83036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45185
I1006 00:53:21.099266   83036 main.go:141] libmachine: () Calling .GetVersion
I1006 00:53:21.099797   83036 main.go:141] libmachine: Using API Version  1
I1006 00:53:21.099821   83036 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 00:53:21.100156   83036 main.go:141] libmachine: () Calling .GetMachineName
I1006 00:53:21.100368   83036 main.go:141] libmachine: (functional-364725) Calling .DriverName
I1006 00:53:21.100660   83036 ssh_runner.go:195] Run: systemctl --version
I1006 00:53:21.100697   83036 main.go:141] libmachine: (functional-364725) Calling .GetSSHHostname
I1006 00:53:21.103808   83036 main.go:141] libmachine: (functional-364725) DBG | domain functional-364725 has defined MAC address 52:54:00:4b:e9:f5 in network mk-functional-364725
I1006 00:53:21.104238   83036 main.go:141] libmachine: (functional-364725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:e9:f5", ip: ""} in network mk-functional-364725: {Iface:virbr1 ExpiryTime:2023-10-06 01:49:59 +0000 UTC Type:0 Mac:52:54:00:4b:e9:f5 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:functional-364725 Clientid:01:52:54:00:4b:e9:f5}
I1006 00:53:21.104275   83036 main.go:141] libmachine: (functional-364725) DBG | domain functional-364725 has defined IP address 192.168.39.32 and MAC address 52:54:00:4b:e9:f5 in network mk-functional-364725
I1006 00:53:21.104441   83036 main.go:141] libmachine: (functional-364725) Calling .GetSSHPort
I1006 00:53:21.104635   83036 main.go:141] libmachine: (functional-364725) Calling .GetSSHKeyPath
I1006 00:53:21.104832   83036 main.go:141] libmachine: (functional-364725) Calling .GetSSHUsername
I1006 00:53:21.104989   83036 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-68418/.minikube/machines/functional-364725/id_rsa Username:docker}
I1006 00:53:21.220826   83036 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1006 00:53:21.248152   83036 main.go:141] libmachine: Making call to close driver server
I1006 00:53:21.248167   83036 main.go:141] libmachine: (functional-364725) Calling .Close
I1006 00:53:21.248449   83036 main.go:141] libmachine: (functional-364725) DBG | Closing plugin on server side
I1006 00:53:21.248472   83036 main.go:141] libmachine: Successfully made call to close driver server
I1006 00:53:21.248489   83036 main.go:141] libmachine: Making call to close connection to plugin binary
I1006 00:53:21.248506   83036 main.go:141] libmachine: Making call to close driver server
I1006 00:53:21.248515   83036 main.go:141] libmachine: (functional-364725) Calling .Close
I1006 00:53:21.248748   83036 main.go:141] libmachine: Successfully made call to close driver server
I1006 00:53:21.248760   83036 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-364725 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "126000000"
- id: a5b7ceed4074932a04ea553af3124bb03b249affe14899e2cd746d1a63e12ecc
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "581000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: e38208064cb669861a691a2ceadb29d72ccf11172feda0e26cc438cd9e9d794b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-364725
size: "30"
- id: 61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "60100000"
- id: c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "73100000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "122000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-364725
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-364725 image ls --format yaml --alsologtostderr:
I1006 00:53:20.306478   82828 out.go:296] Setting OutFile to fd 1 ...
I1006 00:53:20.306744   82828 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 00:53:20.306759   82828 out.go:309] Setting ErrFile to fd 2...
I1006 00:53:20.306766   82828 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 00:53:20.307118   82828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-68418/.minikube/bin
I1006 00:53:20.308100   82828 config.go:182] Loaded profile config "functional-364725": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1006 00:53:20.308267   82828 config.go:182] Loaded profile config "functional-364725": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1006 00:53:20.308873   82828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1006 00:53:20.308957   82828 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 00:53:20.324494   82828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33555
I1006 00:53:20.324962   82828 main.go:141] libmachine: () Calling .GetVersion
I1006 00:53:20.325648   82828 main.go:141] libmachine: Using API Version  1
I1006 00:53:20.325689   82828 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 00:53:20.326050   82828 main.go:141] libmachine: () Calling .GetMachineName
I1006 00:53:20.326230   82828 main.go:141] libmachine: (functional-364725) Calling .GetState
I1006 00:53:20.328430   82828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1006 00:53:20.328490   82828 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 00:53:20.343402   82828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41029
I1006 00:53:20.343844   82828 main.go:141] libmachine: () Calling .GetVersion
I1006 00:53:20.344276   82828 main.go:141] libmachine: Using API Version  1
I1006 00:53:20.344302   82828 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 00:53:20.344673   82828 main.go:141] libmachine: () Calling .GetMachineName
I1006 00:53:20.344838   82828 main.go:141] libmachine: (functional-364725) Calling .DriverName
I1006 00:53:20.345056   82828 ssh_runner.go:195] Run: systemctl --version
I1006 00:53:20.345082   82828 main.go:141] libmachine: (functional-364725) Calling .GetSSHHostname
I1006 00:53:20.347620   82828 main.go:141] libmachine: (functional-364725) DBG | domain functional-364725 has defined MAC address 52:54:00:4b:e9:f5 in network mk-functional-364725
I1006 00:53:20.348047   82828 main.go:141] libmachine: (functional-364725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:e9:f5", ip: ""} in network mk-functional-364725: {Iface:virbr1 ExpiryTime:2023-10-06 01:49:59 +0000 UTC Type:0 Mac:52:54:00:4b:e9:f5 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:functional-364725 Clientid:01:52:54:00:4b:e9:f5}
I1006 00:53:20.348074   82828 main.go:141] libmachine: (functional-364725) DBG | domain functional-364725 has defined IP address 192.168.39.32 and MAC address 52:54:00:4b:e9:f5 in network mk-functional-364725
I1006 00:53:20.348268   82828 main.go:141] libmachine: (functional-364725) Calling .GetSSHPort
I1006 00:53:20.348464   82828 main.go:141] libmachine: (functional-364725) Calling .GetSSHKeyPath
I1006 00:53:20.348625   82828 main.go:141] libmachine: (functional-364725) Calling .GetSSHUsername
I1006 00:53:20.348743   82828 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-68418/.minikube/machines/functional-364725/id_rsa Username:docker}
I1006 00:53:20.443131   82828 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1006 00:53:20.468659   82828 main.go:141] libmachine: Making call to close driver server
I1006 00:53:20.468678   82828 main.go:141] libmachine: (functional-364725) Calling .Close
I1006 00:53:20.468993   82828 main.go:141] libmachine: Successfully made call to close driver server
I1006 00:53:20.469014   82828 main.go:141] libmachine: Making call to close connection to plugin binary
I1006 00:53:20.469024   82828 main.go:141] libmachine: Making call to close driver server
I1006 00:53:20.469031   82828 main.go:141] libmachine: (functional-364725) Calling .Close
I1006 00:53:20.469356   82828 main.go:141] libmachine: Successfully made call to close driver server
I1006 00:53:20.469364   82828 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-364725 ssh pgrep buildkitd: exit status 1 (221.846661ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 image build -t localhost/my-image:functional-364725 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-364725 image build -t localhost/my-image:functional-364725 testdata/build --alsologtostderr: (2.372909052s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-364725 image build -t localhost/my-image:functional-364725 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in b393bf1c2c78
Removing intermediate container b393bf1c2c78
---> 8cbd9c56dbd6
Step 3/3 : ADD content.txt /
---> 62e7242ebfa6
Successfully built 62e7242ebfa6
Successfully tagged localhost/my-image:functional-364725
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-364725 image build -t localhost/my-image:functional-364725 testdata/build --alsologtostderr:
I1006 00:53:20.784242   82912 out.go:296] Setting OutFile to fd 1 ...
I1006 00:53:20.784510   82912 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 00:53:20.784537   82912 out.go:309] Setting ErrFile to fd 2...
I1006 00:53:20.784549   82912 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 00:53:20.784797   82912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-68418/.minikube/bin
I1006 00:53:20.785597   82912 config.go:182] Loaded profile config "functional-364725": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1006 00:53:20.786191   82912 config.go:182] Loaded profile config "functional-364725": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1006 00:53:20.786757   82912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1006 00:53:20.786840   82912 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 00:53:20.818317   82912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33811
I1006 00:53:20.819527   82912 main.go:141] libmachine: () Calling .GetVersion
I1006 00:53:20.820338   82912 main.go:141] libmachine: Using API Version  1
I1006 00:53:20.820379   82912 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 00:53:20.820778   82912 main.go:141] libmachine: () Calling .GetMachineName
I1006 00:53:20.820978   82912 main.go:141] libmachine: (functional-364725) Calling .GetState
I1006 00:53:20.823001   82912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1006 00:53:20.823037   82912 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 00:53:20.842119   82912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44951
I1006 00:53:20.842722   82912 main.go:141] libmachine: () Calling .GetVersion
I1006 00:53:20.843387   82912 main.go:141] libmachine: Using API Version  1
I1006 00:53:20.843419   82912 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 00:53:20.843806   82912 main.go:141] libmachine: () Calling .GetMachineName
I1006 00:53:20.843997   82912 main.go:141] libmachine: (functional-364725) Calling .DriverName
I1006 00:53:20.844194   82912 ssh_runner.go:195] Run: systemctl --version
I1006 00:53:20.844223   82912 main.go:141] libmachine: (functional-364725) Calling .GetSSHHostname
I1006 00:53:20.848197   82912 main.go:141] libmachine: (functional-364725) DBG | domain functional-364725 has defined MAC address 52:54:00:4b:e9:f5 in network mk-functional-364725
I1006 00:53:20.848641   82912 main.go:141] libmachine: (functional-364725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:e9:f5", ip: ""} in network mk-functional-364725: {Iface:virbr1 ExpiryTime:2023-10-06 01:49:59 +0000 UTC Type:0 Mac:52:54:00:4b:e9:f5 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:functional-364725 Clientid:01:52:54:00:4b:e9:f5}
I1006 00:53:20.848686   82912 main.go:141] libmachine: (functional-364725) DBG | domain functional-364725 has defined IP address 192.168.39.32 and MAC address 52:54:00:4b:e9:f5 in network mk-functional-364725
I1006 00:53:20.849024   82912 main.go:141] libmachine: (functional-364725) Calling .GetSSHPort
I1006 00:53:20.849207   82912 main.go:141] libmachine: (functional-364725) Calling .GetSSHKeyPath
I1006 00:53:20.849372   82912 main.go:141] libmachine: (functional-364725) Calling .GetSSHUsername
I1006 00:53:20.849539   82912 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-68418/.minikube/machines/functional-364725/id_rsa Username:docker}
I1006 00:53:20.963289   82912 build_images.go:151] Building image from path: /tmp/build.245916680.tar
I1006 00:53:20.963367   82912 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1006 00:53:20.983336   82912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.245916680.tar
I1006 00:53:20.993534   82912 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.245916680.tar: stat -c "%s %y" /var/lib/minikube/build/build.245916680.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.245916680.tar': No such file or directory
I1006 00:53:20.993595   82912 ssh_runner.go:362] scp /tmp/build.245916680.tar --> /var/lib/minikube/build/build.245916680.tar (3072 bytes)
I1006 00:53:21.042677   82912 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.245916680
I1006 00:53:21.053502   82912 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.245916680 -xf /var/lib/minikube/build/build.245916680.tar
I1006 00:53:21.078282   82912 docker.go:341] Building image: /var/lib/minikube/build/build.245916680
I1006 00:53:21.078355   82912 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-364725 /var/lib/minikube/build/build.245916680
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I1006 00:53:23.027960   82912 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-364725 /var/lib/minikube/build/build.245916680: (1.949571738s)
I1006 00:53:23.028043   82912 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.245916680
I1006 00:53:23.042319   82912 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.245916680.tar
I1006 00:53:23.063459   82912 build_images.go:207] Built localhost/my-image:functional-364725 from /tmp/build.245916680.tar
I1006 00:53:23.063509   82912 build_images.go:123] succeeded building to: functional-364725
I1006 00:53:23.063516   82912 build_images.go:124] failed building to: 
I1006 00:53:23.063555   82912 main.go:141] libmachine: Making call to close driver server
I1006 00:53:23.063568   82912 main.go:141] libmachine: (functional-364725) Calling .Close
I1006 00:53:23.063930   82912 main.go:141] libmachine: (functional-364725) DBG | Closing plugin on server side
I1006 00:53:23.063928   82912 main.go:141] libmachine: Successfully made call to close driver server
I1006 00:53:23.063959   82912 main.go:141] libmachine: Making call to close connection to plugin binary
I1006 00:53:23.063975   82912 main.go:141] libmachine: Making call to close driver server
I1006 00:53:23.063986   82912 main.go:141] libmachine: (functional-364725) Calling .Close
I1006 00:53:23.064259   82912 main.go:141] libmachine: (functional-364725) DBG | Closing plugin on server side
I1006 00:53:23.064298   82912 main.go:141] libmachine: Successfully made call to close driver server
I1006 00:53:23.064317   82912 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 image ls
2023/10/06 00:53:24 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.85s)

TestFunctional/parallel/ImageCommands/Setup (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.35072162s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-364725
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.38s)

TestFunctional/parallel/DockerEnv/bash (1.03s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-364725 docker-env) && out/minikube-linux-amd64 status -p functional-364725"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-364725 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.03s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-364725 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-364725 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-jvm5n" [cefa37b2-672e-44fb-becb-45d0eb360bf7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-jvm5n" [cefa37b2-672e-44fb-becb-45d0eb360bf7] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.029716383s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.32s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 image load --daemon gcr.io/google-containers/addon-resizer:functional-364725 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-364725 image load --daemon gcr.io/google-containers/addon-resizer:functional-364725 --alsologtostderr: (4.008889663s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.25s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 image load --daemon gcr.io/google-containers/addon-resizer:functional-364725 --alsologtostderr
E1006 00:52:29.120716   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-364725 image load --daemon gcr.io/google-containers/addon-resizer:functional-364725 --alsologtostderr: (2.32217677s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.54s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.170392113s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-364725
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 image load --daemon gcr.io/google-containers/addon-resizer:functional-364725 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-364725 image load --daemon gcr.io/google-containers/addon-resizer:functional-364725 --alsologtostderr: (3.685519461s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.10s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 image save gcr.io/google-containers/addon-resizer:functional-364725 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-364725 image save gcr.io/google-containers/addon-resizer:functional-364725 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (2.225642606s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.23s)

TestFunctional/parallel/ServiceCmd/List (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.42s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 service list -o json
functional_test.go:1493: Took "333.761487ms" to run "out/minikube-linux-amd64 -p functional-364725 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.32:32730
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

TestFunctional/parallel/ServiceCmd/Format (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 image rm gcr.io/google-containers/addon-resizer:functional-364725 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.82s)

TestFunctional/parallel/ServiceCmd/URL (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.32:32730
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.45s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-364725 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (2.348734263s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.65s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-364725
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 image save --daemon gcr.io/google-containers/addon-resizer:functional-364725 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-364725 image save --daemon gcr.io/google-containers/addon-resizer:functional-364725 --alsologtostderr: (2.004241745s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-364725
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.04s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "258.736808ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "71.275963ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "274.987996ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "67.835218ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

TestFunctional/parallel/MountCmd/any-port (33.66s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-364725 /tmp/TestFunctionalparallelMountCmdany-port2663790215/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696553565062615499" to /tmp/TestFunctionalparallelMountCmdany-port2663790215/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696553565062615499" to /tmp/TestFunctionalparallelMountCmdany-port2663790215/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696553565062615499" to /tmp/TestFunctionalparallelMountCmdany-port2663790215/001/test-1696553565062615499
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-364725 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (233.809707ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  6 00:52 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  6 00:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  6 00:52 test-1696553565062615499
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh cat /mount-9p/test-1696553565062615499
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-364725 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [285e5bc4-9f97-4d1c-bf05-a02ca4e27b03] Pending
helpers_test.go:344: "busybox-mount" [285e5bc4-9f97-4d1c-bf05-a02ca4e27b03] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E1006 00:52:49.601403   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [285e5bc4-9f97-4d1c-bf05-a02ca4e27b03] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [285e5bc4-9f97-4d1c-bf05-a02ca4e27b03] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 31.020517841s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-364725 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-364725 /tmp/TestFunctionalparallelMountCmdany-port2663790215/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (33.66s)

TestFunctional/parallel/MountCmd/specific-port (1.99s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-364725 /tmp/TestFunctionalparallelMountCmdspecific-port947326059/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-364725 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (218.522242ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-364725 /tmp/TestFunctionalparallelMountCmdspecific-port947326059/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-364725 ssh "sudo umount -f /mount-9p": exit status 1 (225.26ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-364725 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-364725 /tmp/TestFunctionalparallelMountCmdspecific-port947326059/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.99s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.59s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-364725 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1534007010/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-364725 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1534007010/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-364725 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1534007010/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-364725 ssh "findmnt -T" /mount1: exit status 1 (375.767682ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-364725 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-364725 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-364725 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1534007010/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-364725 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1534007010/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-364725 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1534007010/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.59s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-364725
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-364725
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-364725
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (308.41s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-468102 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-468102 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m22.737153608s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-468102 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-468102 cache add gcr.io/k8s-minikube/gvisor-addon:2: (24.758368277s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-468102 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-468102 addons enable gvisor: (3.397803827s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [c74bcb19-a103-46a6-aa04-ed22653b612f] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.03396937s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-468102 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [a3667ba6-1747-4b99-9461-896e8a8050f1] Pending
helpers_test.go:344: "nginx-gvisor" [a3667ba6-1747-4b99-9461-896e8a8050f1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [a3667ba6-1747-4b99-9461-896e8a8050f1] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 13.037403833s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-468102
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-468102: (1m32.432052378s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-468102 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E1006 01:24:50.197914   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/skaffold-246658/client.crt: no such file or directory
E1006 01:25:11.685057   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-468102 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m15.38860887s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [c74bcb19-a103-46a6-aa04-ed22653b612f] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:344: "gvisor" [c74bcb19-a103-46a6-aa04-ed22653b612f] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.044789721s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [a3667ba6-1747-4b99-9461-896e8a8050f1] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.011181938s
helpers_test.go:175: Cleaning up "gvisor-468102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-468102
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-468102: (1.245260355s)
--- PASS: TestGvisorAddon (308.41s)

TestImageBuild/serial/Setup (48.49s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-365669 --driver=kvm2 
E1006 00:53:30.562121   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-365669 --driver=kvm2 : (48.489846038s)
--- PASS: TestImageBuild/serial/Setup (48.49s)

TestImageBuild/serial/NormalBuild (1.5s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-365669
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-365669: (1.495554467s)
--- PASS: TestImageBuild/serial/NormalBuild (1.50s)

TestImageBuild/serial/BuildWithBuildArg (1.25s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-365669
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-365669: (1.250746316s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.25s)

TestImageBuild/serial/BuildWithDockerIgnore (0.39s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-365669
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.39s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.3s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-365669
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.30s)

TestIngressAddonLegacy/StartLegacyK8sCluster (77.77s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-376308 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
E1006 00:54:52.482374   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-376308 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m17.770743945s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (77.77s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.46s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-376308 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-376308 addons enable ingress --alsologtostderr -v=5: (17.461119991s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.46s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-376308 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (47.34s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-376308 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-376308 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.900555582s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-376308 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-376308 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e09fe1cf-35f9-4678-a3ee-53f516fd6f4c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e09fe1cf-35f9-4678-a3ee-53f516fd6f4c] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.015651068s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-376308 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-376308 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-376308 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.145
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-376308 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-376308 addons disable ingress-dns --alsologtostderr -v=1: (10.729996349s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-376308 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-376308 addons disable ingress --alsologtostderr -v=1: (7.508405227s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (47.34s)

TestJSONOutput/start/Command (67.03s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-193750 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E1006 00:57:08.637007   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
E1006 00:57:24.104465   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
E1006 00:57:24.109768   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
E1006 00:57:24.120079   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
E1006 00:57:24.140440   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
E1006 00:57:24.180828   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
E1006 00:57:24.261207   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
E1006 00:57:24.421644   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
E1006 00:57:24.742427   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
E1006 00:57:25.383377   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
E1006 00:57:26.663950   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
E1006 00:57:29.225091   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
E1006 00:57:34.345340   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
E1006 00:57:36.322717   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
E1006 00:57:44.586545   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-193750 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m7.028633928s)
--- PASS: TestJSONOutput/start/Command (67.03s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.57s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-193750 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.57s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.56s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-193750 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (13.12s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-193750 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-193750 --output=json --user=testUser: (13.115585221s)
--- PASS: TestJSONOutput/stop/Command (13.12s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-410772 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-410772 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (80.894901ms)

-- stdout --
	{"specversion":"1.0","id":"ddf67b64-4933-456a-a250-416cb3c976a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-410772] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"42a927d7-680d-4028-9be7-06ce76b56165","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17314"}}
	{"specversion":"1.0","id":"1c1c6634-ca54-43a0-8c8b-4d295744b50b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"461cee7c-1ca1-4392-adcd-eab648670ddc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17314-68418/kubeconfig"}}
	{"specversion":"1.0","id":"abe81374-9039-4803-a31e-33b77e793d98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-68418/.minikube"}}
	{"specversion":"1.0","id":"1a25779e-4847-432d-8f0a-ef4f7193e849","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0d01c199-2bb8-4e90-8a6d-01168c910de5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"60a4f9d9-9e9a-4cb1-b89e-aa5d61f14337","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-410772" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-410772
E1006 00:58:05.066711   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
--- PASS: TestErrorJSONOutput (0.23s)

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (109.53s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-924138 --driver=kvm2 
E1006 00:58:46.026998   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-924138 --driver=kvm2 : (54.206895924s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-927317 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-927317 --driver=kvm2 : (52.814115911s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-924138
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-927317
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-927317" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-927317
helpers_test.go:175: Cleaning up "first-924138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-924138
--- PASS: TestMinikubeProfile (109.53s)

TestMountStart/serial/StartWithMountFirst (28.7s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-023669 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E1006 01:00:07.949684   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-023669 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.704442881s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.70s)

TestMountStart/serial/VerifyMountFirst (0.41s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-023669 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-023669 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

TestMountStart/serial/StartWithMountSecond (29.19s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-039907 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-039907 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (28.185662385s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.19s)

TestMountStart/serial/VerifyMountSecond (0.43s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-039907 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-039907 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.43s)

TestMountStart/serial/DeleteFirst (0.71s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-023669 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

TestMountStart/serial/VerifyMountPostDelete (0.42s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-039907 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-039907 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

TestMountStart/serial/Stop (2.1s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-039907
E1006 01:00:54.732861   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
E1006 01:00:54.738134   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
E1006 01:00:54.748460   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
E1006 01:00:54.768743   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
E1006 01:00:54.809057   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
E1006 01:00:54.889519   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
E1006 01:00:55.049970   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
E1006 01:00:55.370581   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
E1006 01:00:56.011607   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-039907: (2.098300694s)
--- PASS: TestMountStart/serial/Stop (2.10s)

TestMountStart/serial/RestartStopped (23.08s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-039907
E1006 01:00:57.292443   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
E1006 01:00:59.853245   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
E1006 01:01:04.973940   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
E1006 01:01:15.214275   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-039907: (22.08130479s)
--- PASS: TestMountStart/serial/RestartStopped (23.08s)

TestMountStart/serial/VerifyMountPostStop (0.42s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-039907 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-039907 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

TestMultiNode/serial/FreshStart2Nodes (126.43s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-571584 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E1006 01:01:35.695229   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
E1006 01:02:08.637029   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
E1006 01:02:16.655801   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
E1006 01:02:24.103986   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
E1006 01:02:51.790386   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-571584 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m6.010995179s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (126.43s)

TestMultiNode/serial/DeployApp2Nodes (5.86s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571584 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571584 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-571584 -- rollout status deployment/busybox: (3.903300179s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571584 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571584 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571584 -- exec busybox-5bc68d56bd-5vw5w -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571584 -- exec busybox-5bc68d56bd-nk4ln -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571584 -- exec busybox-5bc68d56bd-5vw5w -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571584 -- exec busybox-5bc68d56bd-nk4ln -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571584 -- exec busybox-5bc68d56bd-5vw5w -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571584 -- exec busybox-5bc68d56bd-nk4ln -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.86s)

TestMultiNode/serial/PingHostFrom2Pods (0.98s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571584 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571584 -- exec busybox-5bc68d56bd-5vw5w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571584 -- exec busybox-5bc68d56bd-5vw5w -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571584 -- exec busybox-5bc68d56bd-nk4ln -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-571584 -- exec busybox-5bc68d56bd-nk4ln -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)

TestMultiNode/serial/AddNode (47.99s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-571584 -v 3 --alsologtostderr
E1006 01:03:38.576612   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-571584 -v 3 --alsologtostderr: (47.386608746s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.99s)

TestMultiNode/serial/ProfileList (0.22s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (7.7s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 cp testdata/cp-test.txt multinode-571584:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 ssh -n multinode-571584 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 cp multinode-571584:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2612152400/001/cp-test_multinode-571584.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 ssh -n multinode-571584 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 cp multinode-571584:/home/docker/cp-test.txt multinode-571584-m02:/home/docker/cp-test_multinode-571584_multinode-571584-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 ssh -n multinode-571584 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 ssh -n multinode-571584-m02 "sudo cat /home/docker/cp-test_multinode-571584_multinode-571584-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 cp multinode-571584:/home/docker/cp-test.txt multinode-571584-m03:/home/docker/cp-test_multinode-571584_multinode-571584-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 ssh -n multinode-571584 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 ssh -n multinode-571584-m03 "sudo cat /home/docker/cp-test_multinode-571584_multinode-571584-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 cp testdata/cp-test.txt multinode-571584-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 ssh -n multinode-571584-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 cp multinode-571584-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2612152400/001/cp-test_multinode-571584-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 ssh -n multinode-571584-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 cp multinode-571584-m02:/home/docker/cp-test.txt multinode-571584:/home/docker/cp-test_multinode-571584-m02_multinode-571584.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 ssh -n multinode-571584-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 ssh -n multinode-571584 "sudo cat /home/docker/cp-test_multinode-571584-m02_multinode-571584.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 cp multinode-571584-m02:/home/docker/cp-test.txt multinode-571584-m03:/home/docker/cp-test_multinode-571584-m02_multinode-571584-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 ssh -n multinode-571584-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 ssh -n multinode-571584-m03 "sudo cat /home/docker/cp-test_multinode-571584-m02_multinode-571584-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 cp testdata/cp-test.txt multinode-571584-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 ssh -n multinode-571584-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 cp multinode-571584-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2612152400/001/cp-test_multinode-571584-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 ssh -n multinode-571584-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 cp multinode-571584-m03:/home/docker/cp-test.txt multinode-571584:/home/docker/cp-test_multinode-571584-m03_multinode-571584.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 ssh -n multinode-571584-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 ssh -n multinode-571584 "sudo cat /home/docker/cp-test_multinode-571584-m03_multinode-571584.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 cp multinode-571584-m03:/home/docker/cp-test.txt multinode-571584-m02:/home/docker/cp-test_multinode-571584-m03_multinode-571584-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 ssh -n multinode-571584-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 ssh -n multinode-571584-m02 "sudo cat /home/docker/cp-test_multinode-571584-m03_multinode-571584-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.70s)
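The CopyFile sequence above is one fixed round-trip repeated for every source/destination node pair: copy a fixture onto a node with `minikube cp`, then verify the bytes arrived with `ssh -n <node> "sudo cat ..."`. A minimal local sketch of that round-trip, with temporary directories standing in for the VMs (the file names here are illustrative, not the test's exact paths):

```shell
#!/bin/sh
# Local sketch of the CopyFile round-trip: temp dirs stand in for the
# multinode VMs; plain cp/cmp stand in for `minikube cp` and the
# `ssh -n <node> "sudo cat ..."` verification step.
set -eu
node1=$(mktemp -d)   # plays multinode-571584
node2=$(mktemp -d)   # plays multinode-571584-m02
printf 'Test file for cp-test\n' > "$node1/cp-test.txt"        # the testdata fixture
cp "$node1/cp-test.txt" "$node2/cp-test_node1_node2.txt"       # cp node1 -> node2
cmp -s "$node1/cp-test.txt" "$node2/cp-test_node1_node2.txt" && result=ok
echo "round-trip: $result"
rm -rf "$node1" "$node2"
```

The test simply runs this pattern for every ordered pair of the three nodes, which is why the log shows the same cp/ssh couple a dozen times.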

TestMultiNode/serial/StopNode (3.99s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-571584 node stop m03: (3.100988653s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-571584 status: exit status 7 (452.004606ms)

-- stdout --
	multinode-571584
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-571584-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-571584-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-571584 status --alsologtostderr: exit status 7 (440.994626ms)

-- stdout --
	multinode-571584
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-571584-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-571584-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1006 01:04:33.788952   90159 out.go:296] Setting OutFile to fd 1 ...
	I1006 01:04:33.789225   90159 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:04:33.789235   90159 out.go:309] Setting ErrFile to fd 2...
	I1006 01:04:33.789243   90159 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:04:33.789476   90159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-68418/.minikube/bin
	I1006 01:04:33.789660   90159 out.go:303] Setting JSON to false
	I1006 01:04:33.789704   90159 mustload.go:65] Loading cluster: multinode-571584
	I1006 01:04:33.789814   90159 notify.go:220] Checking for updates...
	I1006 01:04:33.790118   90159 config.go:182] Loaded profile config "multinode-571584": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1006 01:04:33.790136   90159 status.go:255] checking status of multinode-571584 ...
	I1006 01:04:33.790553   90159 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:04:33.790624   90159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:04:33.811984   90159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33747
	I1006 01:04:33.812455   90159 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:04:33.813050   90159 main.go:141] libmachine: Using API Version  1
	I1006 01:04:33.813077   90159 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:04:33.813445   90159 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:04:33.813647   90159 main.go:141] libmachine: (multinode-571584) Calling .GetState
	I1006 01:04:33.815514   90159 status.go:330] multinode-571584 host status = "Running" (err=<nil>)
	I1006 01:04:33.815537   90159 host.go:66] Checking if "multinode-571584" exists ...
	I1006 01:04:33.815831   90159 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:04:33.815866   90159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:04:33.830715   90159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41527
	I1006 01:04:33.831117   90159 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:04:33.831558   90159 main.go:141] libmachine: Using API Version  1
	I1006 01:04:33.831595   90159 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:04:33.831961   90159 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:04:33.832142   90159 main.go:141] libmachine: (multinode-571584) Calling .GetIP
	I1006 01:04:33.835009   90159 main.go:141] libmachine: (multinode-571584) DBG | domain multinode-571584 has defined MAC address 52:54:00:55:98:d6 in network mk-multinode-571584
	I1006 01:04:33.835392   90159 main.go:141] libmachine: (multinode-571584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:98:d6", ip: ""} in network mk-multinode-571584: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:35 +0000 UTC Type:0 Mac:52:54:00:55:98:d6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-571584 Clientid:01:52:54:00:55:98:d6}
	I1006 01:04:33.835425   90159 main.go:141] libmachine: (multinode-571584) DBG | domain multinode-571584 has defined IP address 192.168.39.165 and MAC address 52:54:00:55:98:d6 in network mk-multinode-571584
	I1006 01:04:33.835763   90159 host.go:66] Checking if "multinode-571584" exists ...
	I1006 01:04:33.836064   90159 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:04:33.836104   90159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:04:33.851295   90159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44223
	I1006 01:04:33.851693   90159 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:04:33.852157   90159 main.go:141] libmachine: Using API Version  1
	I1006 01:04:33.852178   90159 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:04:33.852587   90159 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:04:33.852787   90159 main.go:141] libmachine: (multinode-571584) Calling .DriverName
	I1006 01:04:33.853006   90159 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 01:04:33.853030   90159 main.go:141] libmachine: (multinode-571584) Calling .GetSSHHostname
	I1006 01:04:33.855852   90159 main.go:141] libmachine: (multinode-571584) DBG | domain multinode-571584 has defined MAC address 52:54:00:55:98:d6 in network mk-multinode-571584
	I1006 01:04:33.856331   90159 main.go:141] libmachine: (multinode-571584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:98:d6", ip: ""} in network mk-multinode-571584: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:35 +0000 UTC Type:0 Mac:52:54:00:55:98:d6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-571584 Clientid:01:52:54:00:55:98:d6}
	I1006 01:04:33.856373   90159 main.go:141] libmachine: (multinode-571584) DBG | domain multinode-571584 has defined IP address 192.168.39.165 and MAC address 52:54:00:55:98:d6 in network mk-multinode-571584
	I1006 01:04:33.856476   90159 main.go:141] libmachine: (multinode-571584) Calling .GetSSHPort
	I1006 01:04:33.856631   90159 main.go:141] libmachine: (multinode-571584) Calling .GetSSHKeyPath
	I1006 01:04:33.856782   90159 main.go:141] libmachine: (multinode-571584) Calling .GetSSHUsername
	I1006 01:04:33.856895   90159 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-68418/.minikube/machines/multinode-571584/id_rsa Username:docker}
	I1006 01:04:33.937922   90159 ssh_runner.go:195] Run: systemctl --version
	I1006 01:04:33.945082   90159 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 01:04:33.959165   90159 kubeconfig.go:92] found "multinode-571584" server: "https://192.168.39.165:8443"
	I1006 01:04:33.959194   90159 api_server.go:166] Checking apiserver status ...
	I1006 01:04:33.959232   90159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 01:04:33.971828   90159 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1944/cgroup
	I1006 01:04:33.980234   90159 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/podb9f43dfb25a6aec176ea612d887557f3/657942dca812583339e5771515390850cab0b6c961d4afe17f1f863aac6d55c7"
	I1006 01:04:33.980301   90159 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podb9f43dfb25a6aec176ea612d887557f3/657942dca812583339e5771515390850cab0b6c961d4afe17f1f863aac6d55c7/freezer.state
	I1006 01:04:33.989311   90159 api_server.go:204] freezer state: "THAWED"
	I1006 01:04:33.989336   90159 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I1006 01:04:33.994061   90159 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I1006 01:04:33.994094   90159 status.go:421] multinode-571584 apiserver status = Running (err=<nil>)
	I1006 01:04:33.994108   90159 status.go:257] multinode-571584 status: &{Name:multinode-571584 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 01:04:33.994130   90159 status.go:255] checking status of multinode-571584-m02 ...
	I1006 01:04:33.994476   90159 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:04:33.994537   90159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:04:34.009483   90159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45433
	I1006 01:04:34.009908   90159 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:04:34.010368   90159 main.go:141] libmachine: Using API Version  1
	I1006 01:04:34.010394   90159 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:04:34.010752   90159 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:04:34.010923   90159 main.go:141] libmachine: (multinode-571584-m02) Calling .GetState
	I1006 01:04:34.012367   90159 status.go:330] multinode-571584-m02 host status = "Running" (err=<nil>)
	I1006 01:04:34.012394   90159 host.go:66] Checking if "multinode-571584-m02" exists ...
	I1006 01:04:34.012664   90159 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:04:34.012715   90159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:04:34.027372   90159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I1006 01:04:34.027759   90159 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:04:34.028233   90159 main.go:141] libmachine: Using API Version  1
	I1006 01:04:34.028254   90159 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:04:34.028615   90159 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:04:34.028827   90159 main.go:141] libmachine: (multinode-571584-m02) Calling .GetIP
	I1006 01:04:34.031739   90159 main.go:141] libmachine: (multinode-571584-m02) DBG | domain multinode-571584-m02 has defined MAC address 52:54:00:a2:56:86 in network mk-multinode-571584
	I1006 01:04:34.032145   90159 main.go:141] libmachine: (multinode-571584-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:56:86", ip: ""} in network mk-multinode-571584: {Iface:virbr1 ExpiryTime:2023-10-06 02:02:50 +0000 UTC Type:0 Mac:52:54:00:a2:56:86 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:multinode-571584-m02 Clientid:01:52:54:00:a2:56:86}
	I1006 01:04:34.032183   90159 main.go:141] libmachine: (multinode-571584-m02) DBG | domain multinode-571584-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:a2:56:86 in network mk-multinode-571584
	I1006 01:04:34.032326   90159 host.go:66] Checking if "multinode-571584-m02" exists ...
	I1006 01:04:34.032626   90159 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:04:34.032668   90159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:04:34.048218   90159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36701
	I1006 01:04:34.048635   90159 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:04:34.049068   90159 main.go:141] libmachine: Using API Version  1
	I1006 01:04:34.049087   90159 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:04:34.049380   90159 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:04:34.049560   90159 main.go:141] libmachine: (multinode-571584-m02) Calling .DriverName
	I1006 01:04:34.049762   90159 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 01:04:34.049790   90159 main.go:141] libmachine: (multinode-571584-m02) Calling .GetSSHHostname
	I1006 01:04:34.052456   90159 main.go:141] libmachine: (multinode-571584-m02) DBG | domain multinode-571584-m02 has defined MAC address 52:54:00:a2:56:86 in network mk-multinode-571584
	I1006 01:04:34.052918   90159 main.go:141] libmachine: (multinode-571584-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:56:86", ip: ""} in network mk-multinode-571584: {Iface:virbr1 ExpiryTime:2023-10-06 02:02:50 +0000 UTC Type:0 Mac:52:54:00:a2:56:86 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:multinode-571584-m02 Clientid:01:52:54:00:a2:56:86}
	I1006 01:04:34.052947   90159 main.go:141] libmachine: (multinode-571584-m02) DBG | domain multinode-571584-m02 has defined IP address 192.168.39.162 and MAC address 52:54:00:a2:56:86 in network mk-multinode-571584
	I1006 01:04:34.053071   90159 main.go:141] libmachine: (multinode-571584-m02) Calling .GetSSHPort
	I1006 01:04:34.053241   90159 main.go:141] libmachine: (multinode-571584-m02) Calling .GetSSHKeyPath
	I1006 01:04:34.053390   90159 main.go:141] libmachine: (multinode-571584-m02) Calling .GetSSHUsername
	I1006 01:04:34.053528   90159 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-68418/.minikube/machines/multinode-571584-m02/id_rsa Username:docker}
	I1006 01:04:34.138424   90159 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 01:04:34.150547   90159 status.go:257] multinode-571584-m02 status: &{Name:multinode-571584-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1006 01:04:34.150604   90159 status.go:255] checking status of multinode-571584-m03 ...
	I1006 01:04:34.151039   90159 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:04:34.151090   90159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:04:34.166061   90159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33059
	I1006 01:04:34.166534   90159 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:04:34.167066   90159 main.go:141] libmachine: Using API Version  1
	I1006 01:04:34.167096   90159 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:04:34.167436   90159 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:04:34.167654   90159 main.go:141] libmachine: (multinode-571584-m03) Calling .GetState
	I1006 01:04:34.169403   90159 status.go:330] multinode-571584-m03 host status = "Stopped" (err=<nil>)
	I1006 01:04:34.169419   90159 status.go:343] host is not running, skipping remaining checks
	I1006 01:04:34.169424   90159 status.go:257] multinode-571584-m03 status: &{Name:multinode-571584-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.99s)
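The `status --alsologtostderr` output above shows the apiserver probe in stages: find the kube-apiserver PID with `pgrep`, read its freezer cgroup entry from `/proc/<pid>/cgroup`, confirm `freezer.state` is `THAWED`, then hit `/healthz`. A sketch of the cgroup-parsing step, using the exact line recorded in the log (pure string handling, no VM required):

```shell
#!/bin/sh
# Extract the freezer cgroup path the way the status probe does: take the
# `N:freezer:<path>` entry from /proc/<pid>/cgroup. The sample line below
# is copied verbatim from the log above.
set -eu
cgroup_line='2:freezer:/kubepods/burstable/podb9f43dfb25a6aec176ea612d887557f3/657942dca812583339e5771515390850cab0b6c961d4afe17f1f863aac6d55c7'
freezer_path=${cgroup_line#*:freezer:}           # strip the "2:freezer:" prefix
state_file="/sys/fs/cgroup/freezer${freezer_path}/freezer.state"
echo "would read: $state_file"                   # probe then expects THAWED here
```

Only after `freezer.state` reads `THAWED` does the probe bother with the `https://<ip>:8443/healthz` request, since a frozen cgroup would just hang the HTTP check.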

TestMultiNode/serial/StartAfterStop (31.18s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-571584 node start m03 --alsologtostderr: (30.512538925s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.18s)

TestMultiNode/serial/RestartKeepsNodes (179.57s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-571584
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-571584
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-571584: (28.46457885s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-571584 --wait=true -v=8 --alsologtostderr
E1006 01:05:54.731973   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
E1006 01:06:22.417017   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
E1006 01:07:08.637003   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
E1006 01:07:24.103782   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-571584 --wait=true -v=8 --alsologtostderr: (2m30.984467332s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-571584
--- PASS: TestMultiNode/serial/RestartKeepsNodes (179.57s)

TestMultiNode/serial/DeleteNode (1.75s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-571584 node delete m03: (1.196332378s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.75s)

TestMultiNode/serial/StopMultiNode (25.57s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 stop
E1006 01:08:31.683696   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-571584 stop: (25.370748787s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-571584 status: exit status 7 (99.924854ms)

-- stdout --
	multinode-571584
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-571584-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-571584 status --alsologtostderr: exit status 7 (99.214119ms)

-- stdout --
	multinode-571584
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-571584-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1006 01:08:32.203206   91579 out.go:296] Setting OutFile to fd 1 ...
	I1006 01:08:32.203355   91579 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:08:32.203368   91579 out.go:309] Setting ErrFile to fd 2...
	I1006 01:08:32.203376   91579 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:08:32.203582   91579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-68418/.minikube/bin
	I1006 01:08:32.203748   91579 out.go:303] Setting JSON to false
	I1006 01:08:32.203805   91579 mustload.go:65] Loading cluster: multinode-571584
	I1006 01:08:32.203923   91579 notify.go:220] Checking for updates...
	I1006 01:08:32.204372   91579 config.go:182] Loaded profile config "multinode-571584": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1006 01:08:32.204395   91579 status.go:255] checking status of multinode-571584 ...
	I1006 01:08:32.204847   91579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:08:32.204920   91579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:08:32.222543   91579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44077
	I1006 01:08:32.222986   91579 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:08:32.223534   91579 main.go:141] libmachine: Using API Version  1
	I1006 01:08:32.223557   91579 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:08:32.223978   91579 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:08:32.224147   91579 main.go:141] libmachine: (multinode-571584) Calling .GetState
	I1006 01:08:32.225890   91579 status.go:330] multinode-571584 host status = "Stopped" (err=<nil>)
	I1006 01:08:32.225904   91579 status.go:343] host is not running, skipping remaining checks
	I1006 01:08:32.225909   91579 status.go:257] multinode-571584 status: &{Name:multinode-571584 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 01:08:32.225931   91579 status.go:255] checking status of multinode-571584-m02 ...
	I1006 01:08:32.226229   91579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1006 01:08:32.226264   91579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:08:32.240137   91579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44323
	I1006 01:08:32.240499   91579 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:08:32.240941   91579 main.go:141] libmachine: Using API Version  1
	I1006 01:08:32.240962   91579 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:08:32.241281   91579 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:08:32.241449   91579 main.go:141] libmachine: (multinode-571584-m02) Calling .GetState
	I1006 01:08:32.242745   91579 status.go:330] multinode-571584-m02 host status = "Stopped" (err=<nil>)
	I1006 01:08:32.242761   91579 status.go:343] host is not running, skipping remaining checks
	I1006 01:08:32.242768   91579 status.go:257] multinode-571584-m02 status: &{Name:multinode-571584-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.57s)

TestMultiNode/serial/RestartMultiNode (132.62s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-571584 --wait=true -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-571584 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (2m12.067626183s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-571584 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (132.62s)

TestMultiNode/serial/ValidateNameConflict (52.85s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-571584
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-571584-m02 --driver=kvm2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-571584-m02 --driver=kvm2 : exit status 14 (79.249154ms)

-- stdout --
	* [multinode-571584-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-68418/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-68418/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-571584-m02' is duplicated with machine name 'multinode-571584-m02' in profile 'multinode-571584'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-571584-m03 --driver=kvm2 
E1006 01:10:54.732506   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-571584-m03 --driver=kvm2 : (51.449748883s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-571584
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-571584: exit status 80 (232.320926ms)

-- stdout --
	* Adding node m03 to cluster multinode-571584
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-571584-m03 already exists in multinode-571584-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-571584-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-571584-m03: (1.028681689s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (52.85s)

TestPreload (200.83s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-030718 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E1006 01:12:08.637195   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
E1006 01:12:24.103711   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-030718 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (2m2.756480634s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-030718 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-030718 image pull gcr.io/k8s-minikube/busybox: (1.324924366s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-030718
E1006 01:13:47.151141   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-030718: (13.116631116s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-030718 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-030718 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m2.35174781s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-030718 image list
helpers_test.go:175: Cleaning up "test-preload-030718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-030718
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-030718: (1.063757769s)
--- PASS: TestPreload (200.83s)

TestScheduledStopUnix (120.73s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-039539 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-039539 --memory=2048 --driver=kvm2 : (48.854820305s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-039539 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-039539 -n scheduled-stop-039539
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-039539 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-039539 --cancel-scheduled
E1006 01:15:54.732206   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-039539 -n scheduled-stop-039539
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-039539
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-039539 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-039539
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-039539: exit status 7 (84.820516ms)

-- stdout --
	scheduled-stop-039539
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-039539 -n scheduled-stop-039539
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-039539 -n scheduled-stop-039539: exit status 7 (83.826731ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-039539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-039539
--- PASS: TestScheduledStopUnix (120.73s)

TestSkaffold (139.67s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe724155893 version
skaffold_test.go:63: skaffold version: v2.8.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-246658 --memory=2600 --driver=kvm2 
E1006 01:17:08.638325   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
E1006 01:17:17.778932   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
E1006 01:17:24.104328   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-246658 --memory=2600 --driver=kvm2 : (52.124024184s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe724155893 run --minikube-profile skaffold-246658 --kube-context skaffold-246658 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe724155893 run --minikube-profile skaffold-246658 --kube-context skaffold-246658 --status-check=true --port-forward=false --interactive=false: (1m15.662155371s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7b85b58797-2gg45" [e9d81148-a4fc-49ab-abe4-a39502a5073f] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.017150305s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-648b4b7d9d-g94jm" [6169d0ea-3c62-4db9-b5c3-3c1126a052ae] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.011061861s
helpers_test.go:175: Cleaning up "skaffold-246658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-246658
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-246658: (1.206142533s)
--- PASS: TestSkaffold (139.67s)

TestRunningBinaryUpgrade (154.76s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.3404317501.exe start -p running-upgrade-683893 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.3404317501.exe start -p running-upgrade-683893 --memory=2200 --vm-driver=kvm2 : (1m37.13643543s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-683893 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-683893 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (55.905326082s)
helpers_test.go:175: Cleaning up "running-upgrade-683893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-683893
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-683893: (1.363705212s)
--- PASS: TestRunningBinaryUpgrade (154.76s)

TestKubernetesUpgrade (264.04s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-953787 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-953787 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (2m3.210510734s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-953787
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-953787: (13.292503362s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-953787 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-953787 status --format={{.Host}}: exit status 7 (138.900131ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-953787 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-953787 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 : (48.038579102s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-953787 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-953787 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-953787 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (112.435004ms)

-- stdout --
	* [kubernetes-upgrade-953787] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-68418/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-68418/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-953787
	    minikube start -p kubernetes-upgrade-953787 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9537872 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-953787 --kubernetes-version=v1.28.2
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-953787 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-953787 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 : (1m18.194334367s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-953787" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-953787
--- PASS: TestKubernetesUpgrade (264.04s)

TestStoppedBinaryUpgrade/Setup (0.5s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.50s)

TestStoppedBinaryUpgrade/Upgrade (202.12s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.1570665028.exe start -p stopped-upgrade-761641 --memory=2200 --vm-driver=kvm2 
E1006 01:20:54.732105   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.1570665028.exe start -p stopped-upgrade-761641 --memory=2200 --vm-driver=kvm2 : (1m46.567832465s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.1570665028.exe -p stopped-upgrade-761641 stop
E1006 01:22:08.636374   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.1570665028.exe -p stopped-upgrade-761641 stop: (14.092412447s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-761641 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E1006 01:22:24.104153   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-761641 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m21.457671793s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (202.12s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.3s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-761641
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-761641: (1.298914615s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.30s)

TestPause/serial/Start (93.62s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-190235 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
E1006 01:24:09.234472   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/skaffold-246658/client.crt: no such file or directory
E1006 01:24:09.239812   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/skaffold-246658/client.crt: no such file or directory
E1006 01:24:09.250145   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/skaffold-246658/client.crt: no such file or directory
E1006 01:24:09.271283   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/skaffold-246658/client.crt: no such file or directory
E1006 01:24:09.311896   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/skaffold-246658/client.crt: no such file or directory
E1006 01:24:09.392201   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/skaffold-246658/client.crt: no such file or directory
E1006 01:24:09.552369   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/skaffold-246658/client.crt: no such file or directory
E1006 01:24:09.873183   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/skaffold-246658/client.crt: no such file or directory
E1006 01:24:10.514130   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/skaffold-246658/client.crt: no such file or directory
E1006 01:24:11.794824   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/skaffold-246658/client.crt: no such file or directory
E1006 01:24:14.355762   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/skaffold-246658/client.crt: no such file or directory
E1006 01:24:19.476674   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/skaffold-246658/client.crt: no such file or directory
E1006 01:24:29.717066   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/skaffold-246658/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-190235 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m33.62016127s)
--- PASS: TestPause/serial/Start (93.62s)

TestPause/serial/SecondStartNoReconfiguration (56s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-190235 --alsologtostderr -v=1 --driver=kvm2 
E1006 01:25:31.158474   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/skaffold-246658/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-190235 --alsologtostderr -v=1 --driver=kvm2 : (55.970165858s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (56.00s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-124473 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-124473 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (81.907501ms)

-- stdout --
	* [NoKubernetes-124473] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-68418/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-68418/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (62.5s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-124473 --driver=kvm2 
E1006 01:25:54.732488   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-124473 --driver=kvm2 : (1m2.181716858s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-124473 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (62.50s)

TestPause/serial/Pause (0.89s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-190235 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.89s)

TestPause/serial/VerifyStatus (0.34s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-190235 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-190235 --output=json --layout=cluster: exit status 2 (340.382709ms)

-- stdout --
	{"Name":"pause-190235","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-190235","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)

TestPause/serial/Unpause (0.68s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-190235 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

TestPause/serial/PauseAgain (0.84s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-190235 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

TestPause/serial/DeletePaused (1.26s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-190235 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-190235 --alsologtostderr -v=5: (1.26071085s)
--- PASS: TestPause/serial/DeletePaused (1.26s)

TestPause/serial/VerifyDeletedResources (16.52s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (16.519093747s)
--- PASS: TestPause/serial/VerifyDeletedResources (16.52s)

TestNetworkPlugins/group/auto/Start (77.57s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-845522 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-845522 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m17.574012791s)
--- PASS: TestNetworkPlugins/group/auto/Start (77.57s)

TestNetworkPlugins/group/kindnet/Start (109.37s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-845522 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-845522 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m49.3712797s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (109.37s)

TestNoKubernetes/serial/StartWithStopK8s (61.51s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-124473 --no-kubernetes --driver=kvm2 
E1006 01:26:53.078676   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/skaffold-246658/client.crt: no such file or directory
E1006 01:27:08.636515   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-124473 --no-kubernetes --driver=kvm2 : (59.871451763s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-124473 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-124473 status -o json: exit status 2 (340.323137ms)

-- stdout --
	{"Name":"NoKubernetes-124473","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-124473
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-124473: (1.299139084s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (61.51s)

TestNetworkPlugins/group/calico/Start (115.71s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-845522 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E1006 01:27:24.104400   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-845522 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m55.710063842s)
--- PASS: TestNetworkPlugins/group/calico/Start (115.71s)

TestNoKubernetes/serial/Start (39.15s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-124473 --no-kubernetes --driver=kvm2 
E1006 01:27:54.786563   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/gvisor-468102/client.crt: no such file or directory
E1006 01:27:54.791828   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/gvisor-468102/client.crt: no such file or directory
E1006 01:27:54.802166   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/gvisor-468102/client.crt: no such file or directory
E1006 01:27:54.822539   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/gvisor-468102/client.crt: no such file or directory
E1006 01:27:54.862871   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/gvisor-468102/client.crt: no such file or directory
E1006 01:27:54.943228   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/gvisor-468102/client.crt: no such file or directory
E1006 01:27:55.103705   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/gvisor-468102/client.crt: no such file or directory
E1006 01:27:55.424337   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/gvisor-468102/client.crt: no such file or directory
E1006 01:27:56.065400   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/gvisor-468102/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-124473 --no-kubernetes --driver=kvm2 : (39.154290223s)
--- PASS: TestNoKubernetes/serial/Start (39.15s)

TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-845522 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

TestNetworkPlugins/group/auto/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-845522 replace --force -f testdata/netcat-deployment.yaml
E1006 01:27:57.346174   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/gvisor-468102/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-h8wkw" [20a72718-111b-472b-a4e1-f1cfb3fd1a98] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1006 01:27:59.907177   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/gvisor-468102/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-h8wkw" [20a72718-111b-472b-a4e1-f1cfb3fd1a98] Running
E1006 01:28:05.027946   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/gvisor-468102/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.011655741s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.32s)

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-845522 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-845522 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-845522 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)

TestNetworkPlugins/group/custom-flannel/Start (78.51s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-845522 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-845522 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m18.509136463s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (78.51s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-124473 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-124473 "sudo systemctl is-active --quiet service kubelet": exit status 1 (245.824058ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

TestNoKubernetes/serial/ProfileList (1.34s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.34s)

TestNoKubernetes/serial/Stop (115.58s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-124473
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-124473: (1m55.581446502s)
--- PASS: TestNoKubernetes/serial/Stop (115.58s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bh8cx" [6b66185c-caec-46e7-9437-60ff611f3852] Running
E1006 01:28:35.748924   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/gvisor-468102/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.023771621s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-845522 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.43s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-845522 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s6d92" [d9f46869-09c7-4f6e-91f6-9393629ea19e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-s6d92" [d9f46869-09c7-4f6e-91f6-9393629ea19e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.01297593s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.43s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-845522 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-845522 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-845522 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/false/Start (75.99s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-845522 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E1006 01:29:16.709789   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/gvisor-468102/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-845522 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m15.990373988s)
--- PASS: TestNetworkPlugins/group/false/Start (75.99s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-568wt" [2c3f1d61-30ba-4a1e-84a6-17048a872040] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.029497403s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-845522 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

TestNetworkPlugins/group/calico/NetCatPod (12.42s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-845522 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lg4lk" [2a5650b1-5542-4b50-b1d1-cae87ab48efc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lg4lk" [2a5650b1-5542-4b50-b1d1-cae87ab48efc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.015600549s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.42s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-845522 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-845522 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-845522 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-845522 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (15.47s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-845522 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-g84fk" [1f39a0bd-0c6a-480f-bbcd-c8889c84d990] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-g84fk" [1f39a0bd-0c6a-480f-bbcd-c8889c84d990] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 15.012633441s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (15.47s)

TestNetworkPlugins/group/enable-default-cni/Start (79.74s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-845522 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-845522 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m19.74358313s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.74s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-845522 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-845522 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-845522 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/flannel/Start (91.41s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-845522 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-845522 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m31.412608617s)
--- PASS: TestNetworkPlugins/group/flannel/Start (91.41s)

TestNetworkPlugins/group/false/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-845522 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.27s)

TestNetworkPlugins/group/false/NetCatPod (14.49s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-845522 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cq5fd" [2b465fce-8371-4be8-84b5-d0c1cff86908] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1006 01:30:27.151337   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-cq5fd" [2b465fce-8371-4be8-84b5-d0c1cff86908] Running
E1006 01:30:38.630019   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/gvisor-468102/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 14.009796975s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (14.49s)

TestNetworkPlugins/group/false/DNS (20.95s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-845522 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context false-845522 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.182922856s)

-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context false-845522 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context false-845522 exec deployment/netcat -- nslookup kubernetes.default: (5.202168838s)
--- PASS: TestNetworkPlugins/group/false/DNS (20.95s)

TestNetworkPlugins/group/bridge/Start (85.58s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-845522 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E1006 01:30:54.731920   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-845522 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m25.584443273s)
--- PASS: TestNetworkPlugins/group/bridge/Start (85.58s)

TestNetworkPlugins/group/false/Localhost (0.88s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-845522 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.88s)

TestNetworkPlugins/group/false/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-845522 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.24s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-845522 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.46s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-845522 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gvgkm" [021c1794-fc89-405f-af3f-818e5eacf626] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gvgkm" [021c1794-fc89-405f-af3f-818e5eacf626] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.016676769s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.46s)

TestNetworkPlugins/group/kubenet/Start (80.57s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-845522 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-845522 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m20.574379947s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (80.57s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-845522 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-845522 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-845522 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestStartStop/group/old-k8s-version/serial/FirstStart (144.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-456697 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-456697 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m24.701948357s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (144.70s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-mrrp2" [27e72480-e495-48b8-a44c-f2319b0daffa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.028857403s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-845522 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (14.54s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-845522 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context flannel-845522 replace --force -f testdata/netcat-deployment.yaml: (1.469939312s)
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4vcrj" [07684141-67fc-4388-87a0-c9ddb3966c24] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4vcrj" [07684141-67fc-4388-87a0-c9ddb3966c24] Running
E1006 01:32:08.636908   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.022751633s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.54s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-845522 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-845522 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-845522 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-845522 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.34s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-845522 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pfw4q" [1f23ef45-e9a9-4173-af1f-2eb32d1f6b79] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pfw4q" [1f23ef45-e9a9-4173-af1f-2eb32d1f6b79] Running
E1006 01:32:24.103582   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.011898964s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-845522 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-845522 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-845522 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (91.7s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-009149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-009149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2: (1m31.700506828s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (91.70s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-845522 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (13.47s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-845522 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9tx6g" [b00db645-e33d-4424-8be9-0cfe323d7bd5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9tx6g" [b00db645-e33d-4424-8be9-0cfe323d7bd5] Running
E1006 01:32:54.786399   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/gvisor-468102/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.012530119s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.47s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (120.36s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-150489 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-150489 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2: (2m0.356800154s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (120.36s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-845522 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-845522 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-845522 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (116.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-987060 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2
E1006 01:33:17.902905   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/auto-845522/client.crt: no such file or directory
E1006 01:33:22.470250   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/gvisor-468102/client.crt: no such file or directory
E1006 01:33:32.649182   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kindnet-845522/client.crt: no such file or directory
E1006 01:33:32.654577   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kindnet-845522/client.crt: no such file or directory
E1006 01:33:32.664961   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kindnet-845522/client.crt: no such file or directory
E1006 01:33:32.685307   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kindnet-845522/client.crt: no such file or directory
E1006 01:33:32.725633   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kindnet-845522/client.crt: no such file or directory
E1006 01:33:32.806052   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kindnet-845522/client.crt: no such file or directory
E1006 01:33:32.966536   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kindnet-845522/client.crt: no such file or directory
E1006 01:33:33.287212   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kindnet-845522/client.crt: no such file or directory
E1006 01:33:33.928242   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kindnet-845522/client.crt: no such file or directory
E1006 01:33:35.208579   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kindnet-845522/client.crt: no such file or directory
E1006 01:33:37.769098   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kindnet-845522/client.crt: no such file or directory
E1006 01:33:38.383673   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/auto-845522/client.crt: no such file or directory
E1006 01:33:42.889665   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kindnet-845522/client.crt: no such file or directory
E1006 01:33:53.130114   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kindnet-845522/client.crt: no such file or directory
E1006 01:33:57.779801   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-987060 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2: (1m56.590096165s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (116.59s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.57s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-009149 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dd92dd40-6c07-476e-abd5-f2ecef8561a2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dd92dd40-6c07-476e-abd5-f2ecef8561a2] Running
E1006 01:34:09.234170   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/skaffold-246658/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.024394988s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-009149 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-456697 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2818f5c3-82d9-4e5c-adba-83094ff52c1f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2818f5c3-82d9-4e5c-adba-83094ff52c1f] Running
E1006 01:34:17.405033   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/calico-845522/client.crt: no such file or directory
E1006 01:34:17.410352   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/calico-845522/client.crt: no such file or directory
E1006 01:34:17.420656   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/calico-845522/client.crt: no such file or directory
E1006 01:34:17.441070   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/calico-845522/client.crt: no such file or directory
E1006 01:34:17.481420   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/calico-845522/client.crt: no such file or directory
E1006 01:34:17.561810   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/calico-845522/client.crt: no such file or directory
E1006 01:34:17.722649   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/calico-845522/client.crt: no such file or directory
E1006 01:34:18.043791   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/calico-845522/client.crt: no such file or directory
E1006 01:34:18.684320   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/calico-845522/client.crt: no such file or directory
E1006 01:34:19.344463   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/auto-845522/client.crt: no such file or directory
E1006 01:34:19.964851   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/calico-845522/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.052827789s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-456697 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.52s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.46s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-009149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1006 01:34:13.610719   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kindnet-845522/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-009149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.346334956s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-009149 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.46s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.19s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-009149 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-009149 --alsologtostderr -v=3: (13.191829552s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-456697 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1006 01:34:22.525894   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/calico-845522/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-456697 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-456697 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-456697 --alsologtostderr -v=3: (13.155154645s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-009149 -n no-preload-009149
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-009149 -n no-preload-009149: exit status 7 (83.937179ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-009149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (333.43s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-009149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2
E1006 01:34:27.647075   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/calico-845522/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-009149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2: (5m33.064350207s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-009149 -n no-preload-009149
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (333.43s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456697 -n old-k8s-version-456697
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456697 -n old-k8s-version-456697: exit status 7 (90.29659ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-456697 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (478.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-456697 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E1006 01:34:37.888071   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/calico-845522/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-456697 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (7m57.83808242s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456697 -n old-k8s-version-456697
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (478.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.51s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-150489 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
E1006 01:34:46.565901   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/custom-flannel-845522/client.crt: no such file or directory
helpers_test.go:344: "busybox" [7ee38ac5-5c64-46b6-b0ea-4f6ad8b7f04e] Pending
E1006 01:34:46.571167   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/custom-flannel-845522/client.crt: no such file or directory
E1006 01:34:46.581312   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/custom-flannel-845522/client.crt: no such file or directory
E1006 01:34:46.601592   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/custom-flannel-845522/client.crt: no such file or directory
E1006 01:34:46.642209   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/custom-flannel-845522/client.crt: no such file or directory
E1006 01:34:46.722561   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/custom-flannel-845522/client.crt: no such file or directory
E1006 01:34:46.883454   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/custom-flannel-845522/client.crt: no such file or directory
E1006 01:34:47.203857   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/custom-flannel-845522/client.crt: no such file or directory
helpers_test.go:344: "busybox" [7ee38ac5-5c64-46b6-b0ea-4f6ad8b7f04e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1006 01:34:47.844938   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/custom-flannel-845522/client.crt: no such file or directory
E1006 01:34:49.126166   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/custom-flannel-845522/client.crt: no such file or directory
helpers_test.go:344: "busybox" [7ee38ac5-5c64-46b6-b0ea-4f6ad8b7f04e] Running
E1006 01:34:51.687024   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/custom-flannel-845522/client.crt: no such file or directory
E1006 01:34:54.571824   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kindnet-845522/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.02873246s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-150489 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.51s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.33s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-150489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1006 01:34:56.807507   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/custom-flannel-845522/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-150489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.234051624s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-150489 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.33s)

TestStartStop/group/embed-certs/serial/Stop (13.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-150489 --alsologtostderr -v=3
E1006 01:34:58.368943   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/calico-845522/client.crt: no such file or directory
E1006 01:35:07.047788   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/custom-flannel-845522/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-150489 --alsologtostderr -v=3: (13.163668615s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.16s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-150489 -n embed-certs-150489
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-150489 -n embed-certs-150489: exit status 7 (99.270488ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-150489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/embed-certs/serial/SecondStart (335.22s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-150489 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-150489 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2: (5m34.866945447s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-150489 -n embed-certs-150489
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (335.22s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.68s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-987060 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dbeadbe0-28ec-4b00-8572-5f682bebd6ec] Pending
helpers_test.go:344: "busybox" [dbeadbe0-28ec-4b00-8572-5f682bebd6ec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dbeadbe0-28ec-4b00-8572-5f682bebd6ec] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.031011438s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-987060 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.68s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-987060 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-987060 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.349555564s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-987060 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.45s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-987060 --alsologtostderr -v=3
E1006 01:35:26.521331   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/false-845522/client.crt: no such file or directory
E1006 01:35:26.526616   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/false-845522/client.crt: no such file or directory
E1006 01:35:26.536955   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/false-845522/client.crt: no such file or directory
E1006 01:35:26.557327   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/false-845522/client.crt: no such file or directory
E1006 01:35:26.597845   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/false-845522/client.crt: no such file or directory
E1006 01:35:26.678549   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/false-845522/client.crt: no such file or directory
E1006 01:35:26.839020   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/false-845522/client.crt: no such file or directory
E1006 01:35:27.159540   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/false-845522/client.crt: no such file or directory
E1006 01:35:27.528507   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/custom-flannel-845522/client.crt: no such file or directory
E1006 01:35:27.800500   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/false-845522/client.crt: no such file or directory
E1006 01:35:29.081082   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/false-845522/client.crt: no such file or directory
E1006 01:35:31.642134   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/false-845522/client.crt: no such file or directory
E1006 01:35:36.763225   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/false-845522/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-987060 --alsologtostderr -v=3: (13.173270198s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.17s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-987060 -n default-k8s-diff-port-987060
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-987060 -n default-k8s-diff-port-987060: exit status 7 (103.332516ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-987060 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (314.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-987060 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2
E1006 01:35:39.329790   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/calico-845522/client.crt: no such file or directory
E1006 01:35:41.265628   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/auto-845522/client.crt: no such file or directory
E1006 01:35:47.004264   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/false-845522/client.crt: no such file or directory
E1006 01:35:54.732403   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
E1006 01:36:07.485450   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/false-845522/client.crt: no such file or directory
E1006 01:36:08.488717   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/custom-flannel-845522/client.crt: no such file or directory
E1006 01:36:15.873490   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/enable-default-cni-845522/client.crt: no such file or directory
E1006 01:36:15.878836   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/enable-default-cni-845522/client.crt: no such file or directory
E1006 01:36:15.889234   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/enable-default-cni-845522/client.crt: no such file or directory
E1006 01:36:15.909437   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/enable-default-cni-845522/client.crt: no such file or directory
E1006 01:36:15.949804   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/enable-default-cni-845522/client.crt: no such file or directory
E1006 01:36:16.030208   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/enable-default-cni-845522/client.crt: no such file or directory
E1006 01:36:16.190630   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/enable-default-cni-845522/client.crt: no such file or directory
E1006 01:36:16.492707   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kindnet-845522/client.crt: no such file or directory
E1006 01:36:16.510827   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/enable-default-cni-845522/client.crt: no such file or directory
E1006 01:36:17.151198   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/enable-default-cni-845522/client.crt: no such file or directory
E1006 01:36:18.431744   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/enable-default-cni-845522/client.crt: no such file or directory
E1006 01:36:20.992023   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/enable-default-cni-845522/client.crt: no such file or directory
E1006 01:36:26.112929   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/enable-default-cni-845522/client.crt: no such file or directory
E1006 01:36:36.353669   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/enable-default-cni-845522/client.crt: no such file or directory
E1006 01:36:48.446090   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/false-845522/client.crt: no such file or directory
E1006 01:36:51.130388   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/flannel-845522/client.crt: no such file or directory
E1006 01:36:51.135938   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/flannel-845522/client.crt: no such file or directory
E1006 01:36:51.146232   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/flannel-845522/client.crt: no such file or directory
E1006 01:36:51.166606   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/flannel-845522/client.crt: no such file or directory
E1006 01:36:51.206964   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/flannel-845522/client.crt: no such file or directory
E1006 01:36:51.287340   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/flannel-845522/client.crt: no such file or directory
E1006 01:36:51.448162   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/flannel-845522/client.crt: no such file or directory
E1006 01:36:51.768800   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/flannel-845522/client.crt: no such file or directory
E1006 01:36:52.409571   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/flannel-845522/client.crt: no such file or directory
E1006 01:36:53.690059   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/flannel-845522/client.crt: no such file or directory
E1006 01:36:56.250850   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/flannel-845522/client.crt: no such file or directory
E1006 01:36:56.834536   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/enable-default-cni-845522/client.crt: no such file or directory
E1006 01:37:01.250345   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/calico-845522/client.crt: no such file or directory
E1006 01:37:01.371627   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/flannel-845522/client.crt: no such file or directory
E1006 01:37:08.636385   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
E1006 01:37:11.612217   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/flannel-845522/client.crt: no such file or directory
E1006 01:37:13.654059   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/bridge-845522/client.crt: no such file or directory
E1006 01:37:13.659365   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/bridge-845522/client.crt: no such file or directory
E1006 01:37:13.669649   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/bridge-845522/client.crt: no such file or directory
E1006 01:37:13.689995   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/bridge-845522/client.crt: no such file or directory
E1006 01:37:13.730333   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/bridge-845522/client.crt: no such file or directory
E1006 01:37:13.810686   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/bridge-845522/client.crt: no such file or directory
E1006 01:37:13.971127   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/bridge-845522/client.crt: no such file or directory
E1006 01:37:14.291524   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/bridge-845522/client.crt: no such file or directory
E1006 01:37:14.932367   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/bridge-845522/client.crt: no such file or directory
E1006 01:37:16.213501   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/bridge-845522/client.crt: no such file or directory
E1006 01:37:18.773953   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/bridge-845522/client.crt: no such file or directory
E1006 01:37:23.894371   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/bridge-845522/client.crt: no such file or directory
E1006 01:37:24.103748   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
E1006 01:37:30.409687   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/custom-flannel-845522/client.crt: no such file or directory
E1006 01:37:32.092669   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/flannel-845522/client.crt: no such file or directory
E1006 01:37:34.135006   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/bridge-845522/client.crt: no such file or directory
E1006 01:37:37.795738   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/enable-default-cni-845522/client.crt: no such file or directory
E1006 01:37:42.160690   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kubenet-845522/client.crt: no such file or directory
E1006 01:37:42.166064   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kubenet-845522/client.crt: no such file or directory
E1006 01:37:42.176372   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kubenet-845522/client.crt: no such file or directory
E1006 01:37:42.196699   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kubenet-845522/client.crt: no such file or directory
E1006 01:37:42.237107   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kubenet-845522/client.crt: no such file or directory
E1006 01:37:42.317462   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kubenet-845522/client.crt: no such file or directory
E1006 01:37:42.477888   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kubenet-845522/client.crt: no such file or directory
E1006 01:37:42.798707   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kubenet-845522/client.crt: no such file or directory
E1006 01:37:43.438907   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kubenet-845522/client.crt: no such file or directory
E1006 01:37:44.719287   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kubenet-845522/client.crt: no such file or directory
E1006 01:37:47.280383   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kubenet-845522/client.crt: no such file or directory
E1006 01:37:52.400709   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kubenet-845522/client.crt: no such file or directory
E1006 01:37:54.615772   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/bridge-845522/client.crt: no such file or directory
E1006 01:37:54.786557   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/gvisor-468102/client.crt: no such file or directory
E1006 01:37:57.421750   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/auto-845522/client.crt: no such file or directory
E1006 01:38:02.640926   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kubenet-845522/client.crt: no such file or directory
E1006 01:38:10.366773   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/false-845522/client.crt: no such file or directory
E1006 01:38:13.053890   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/flannel-845522/client.crt: no such file or directory
E1006 01:38:23.121451   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kubenet-845522/client.crt: no such file or directory
E1006 01:38:25.106555   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/auto-845522/client.crt: no such file or directory
E1006 01:38:32.649148   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kindnet-845522/client.crt: no such file or directory
E1006 01:38:35.576666   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/bridge-845522/client.crt: no such file or directory
E1006 01:38:59.716569   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/enable-default-cni-845522/client.crt: no such file or directory
E1006 01:39:00.333407   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kindnet-845522/client.crt: no such file or directory
E1006 01:39:04.081669   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kubenet-845522/client.crt: no such file or directory
E1006 01:39:09.234418   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/skaffold-246658/client.crt: no such file or directory
E1006 01:39:17.405194   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/calico-845522/client.crt: no such file or directory
E1006 01:39:34.974942   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/flannel-845522/client.crt: no such file or directory
E1006 01:39:45.090660   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/calico-845522/client.crt: no such file or directory
E1006 01:39:46.565412   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/custom-flannel-845522/client.crt: no such file or directory
E1006 01:39:57.496889   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/bridge-845522/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-987060 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2: (5m14.120989527s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-987060 -n default-k8s-diff-port-987060
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (314.44s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (15.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9xk2p" [21b486aa-f07d-46db-b0e5-88ade9ccf7b8] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9xk2p" [21b486aa-f07d-46db-b0e5-88ade9ccf7b8] Running
E1006 01:40:14.250671   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/custom-flannel-845522/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.033837689s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (15.04s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9xk2p" [21b486aa-f07d-46db-b0e5-88ade9ccf7b8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016961292s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-009149 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-009149 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/no-preload/serial/Pause (2.84s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-009149 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-009149 -n no-preload-009149
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-009149 -n no-preload-009149: exit status 2 (307.524244ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-009149 -n no-preload-009149
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-009149 -n no-preload-009149: exit status 2 (288.331359ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-009149 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-009149 -n no-preload-009149
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-009149 -n no-preload-009149
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.84s)

TestStartStop/group/newest-cni/serial/FirstStart (75.34s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-516412 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2
E1006 01:40:26.002566   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/kubenet-845522/client.crt: no such file or directory
E1006 01:40:26.521117   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/false-845522/client.crt: no such file or directory
E1006 01:40:32.280362   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/skaffold-246658/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-516412 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2: (1m15.339371938s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (75.34s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.17s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fkcmv" [e7a7e8f4-e98d-435a-aec5-9b4c0aa81db2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fkcmv" [e7a7e8f4-e98d-435a-aec5-9b4c0aa81db2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.16924335s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.17s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gvz4f" [6738f6ce-99ea-4b39-8ef2-831d6140dc7e] Running
E1006 01:40:54.207262   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/false-845522/client.crt: no such file or directory
E1006 01:40:54.732654   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/ingress-addon-legacy-376308/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.022983554s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gvz4f" [6738f6ce-99ea-4b39-8ef2-831d6140dc7e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.05679436s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-987060 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.36s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-987060 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.47s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fkcmv" [e7a7e8f4-e98d-435a-aec5-9b4c0aa81db2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014321887s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-150489 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-987060 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-987060 --alsologtostderr -v=1: (1.624974541s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-987060 -n default-k8s-diff-port-987060
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-987060 -n default-k8s-diff-port-987060: exit status 2 (319.974984ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-987060 -n default-k8s-diff-port-987060
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-987060 -n default-k8s-diff-port-987060: exit status 2 (321.529308ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-987060 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-987060 -n default-k8s-diff-port-987060
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-987060 -n default-k8s-diff-port-987060
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.89s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-150489 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/embed-certs/serial/Pause (3.04s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-150489 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-150489 -n embed-certs-150489
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-150489 -n embed-certs-150489: exit status 2 (303.928714ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-150489 -n embed-certs-150489
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-150489 -n embed-certs-150489: exit status 2 (301.230391ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-150489 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-150489 -n embed-certs-150489
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-150489 -n embed-certs-150489
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.04s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-516412 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-516412 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.040460284s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/newest-cni/serial/Stop (8.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-516412 --alsologtostderr -v=3
E1006 01:41:43.557033   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/enable-default-cni-845522/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-516412 --alsologtostderr -v=3: (8.130230375s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.13s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-516412 -n newest-cni-516412
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-516412 -n newest-cni-516412: exit status 7 (82.114944ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-516412 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (47.37s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-516412 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2
E1006 01:41:51.130777   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/flannel-845522/client.crt: no such file or directory
E1006 01:41:51.686048   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
E1006 01:42:08.636568   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/addons-672690/client.crt: no such file or directory
E1006 01:42:13.654004   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/bridge-845522/client.crt: no such file or directory
E1006 01:42:18.815441   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/flannel-845522/client.crt: no such file or directory
E1006 01:42:24.103979   75596 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-68418/.minikube/profiles/functional-364725/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-516412 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2: (47.059251059s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-516412 -n newest-cni-516412
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (47.37s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-xlkzn" [cd59a10b-708e-4ae4-b425-a95fb34e6ad8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.02382657s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-516412 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (2.4s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-516412 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-516412 -n newest-cni-516412
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-516412 -n newest-cni-516412: exit status 2 (257.014998ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-516412 -n newest-cni-516412
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-516412 -n newest-cni-516412: exit status 2 (252.146466ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-516412 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-516412 -n newest-cni-516412
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-516412 -n newest-cni-516412
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.40s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-xlkzn" [cd59a10b-708e-4ae4-b425-a95fb34e6ad8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013903923s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-456697 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/Pause (2.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-456697 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-456697 -n old-k8s-version-456697
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-456697 -n old-k8s-version-456697: exit status 2 (255.755219ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-456697 -n old-k8s-version-456697
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-456697 -n old-k8s-version-456697: exit status 2 (251.705706ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-456697 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-456697 -n old-k8s-version-456697
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-456697 -n old-k8s-version-456697
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.37s)

Test skip (31/320)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (4.48s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-845522 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-845522

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-845522

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-845522

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-845522

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-845522

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-845522

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-845522

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-845522

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-845522

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-845522

>>> host: /etc/nsswitch.conf:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: /etc/hosts:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: /etc/resolv.conf:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-845522

>>> host: crictl pods:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: crictl containers:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> k8s: describe netcat deployment:
error: context "cilium-845522" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-845522" does not exist

>>> k8s: netcat logs:
error: context "cilium-845522" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-845522" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-845522" does not exist

>>> k8s: coredns logs:
error: context "cilium-845522" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-845522" does not exist

>>> k8s: api server logs:
error: context "cilium-845522" does not exist

>>> host: /etc/cni:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: ip a s:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: ip r s:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: iptables-save:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: iptables table nat:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-845522

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-845522

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-845522" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-845522" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-845522

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-845522

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-845522" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-845522" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-845522" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-845522" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-845522" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: kubelet daemon config:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> k8s: kubelet logs:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-845522

>>> host: docker daemon status:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: docker daemon config:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: docker system info:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: cri-docker daemon status:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: cri-docker daemon config:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: cri-dockerd version:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: containerd daemon status:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: containerd daemon config:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: containerd config dump:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: crio daemon status:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: crio daemon config:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: /etc/crio:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

>>> host: crio config:
* Profile "cilium-845522" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845522"

----------------------- debugLogs end: cilium-845522 [took: 4.317795287s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-845522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-845522
--- SKIP: TestNetworkPlugins/group/cilium (4.48s)

TestStartStop/group/disable-driver-mounts (0.19s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-646676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-646676
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)