Test Report: KVM_Linux 19409

edd4f56319c0ca210375a4ae17d17ce22fec0e34:2024-08-12:35748

Failed tests (1/349)

| Order | Failed test                           | Duration |
|-------|---------------------------------------|----------|
| 91    | TestFunctional/serial/ComponentHealth | 1.87s    |
TestFunctional/serial/ComponentHealth (1.87s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-470148 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:833: etcd is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:True} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.217 PodIP:192.168.39.217 StartTime:2024-08-12 10:32:08 +0000 UTC ContainerStatuses:[{Name:etcd State:{Waiting:<nil> Running:0xc001bdd110 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0005e8070} Ready:true RestartCount:3 Image:registry.k8s.io/etcd:3.5.12-0 ImageID:docker-pullable://registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b ContainerID:docker://a25c22de2da6249de770ecc96c990b8b0e3386d4e869264ebf1f7cbf66a8fc12}]}
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:833: kube-apiserver is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.217 PodIP:192.168.39.217 StartTime:2024-08-12 10:33:32 +0000 UTC ContainerStatuses:[{Name:kube-apiserver State:{Waiting:<nil> Running:0xc001bdd170 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:registry.k8s.io/kube-apiserver:v1.30.3 ImageID:docker-pullable://registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c ContainerID:docker://c8647e19fcd0be1534837d157a1e464d81c60801a1dddf73e73318fbf9a0f9dd}]}
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:833: kube-controller-manager is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:True} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.217 PodIP:192.168.39.217 StartTime:2024-08-12 10:32:08 +0000 UTC ContainerStatuses:[{Name:kube-controller-manager State:{Waiting:<nil> Running:0xc001bdd1d0 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0005e80e0} Ready:true RestartCount:3 Image:registry.k8s.io/kube-controller-manager:v1.30.3 ImageID:docker-pullable://registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7 ContainerID:docker://b9857f8f48fd9b2fe2d5b4fb0bf07b34494306062ed04d0f86d679e06c79f31e}]}
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
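
The check above flags etcd, kube-apiserver, and kube-controller-manager as Phase:Running but Ready:False (etcd and kube-controller-manager after RestartCount:3). A quick way to spot-check the same condition by hand is a jsonpath query over the control-plane pods; this is a sketch that assumes the same kubectl context the test used:

	kubectl --context functional-470148 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'

Any pod printing Running with a Ready status of False is exactly the state the test rejects here.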
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-470148 -n functional-470148
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-470148 logs -n 25: (1.100398391s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-338210 --log_dir                                                  | nospam-338210     | jenkins | v1.33.1 | 12 Aug 24 10:28 UTC | 12 Aug 24 10:28 UTC |
	|         | /tmp/nospam-338210 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-338210 --log_dir                                                  | nospam-338210     | jenkins | v1.33.1 | 12 Aug 24 10:28 UTC | 12 Aug 24 10:28 UTC |
	|         | /tmp/nospam-338210 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-338210 --log_dir                                                  | nospam-338210     | jenkins | v1.33.1 | 12 Aug 24 10:28 UTC | 12 Aug 24 10:28 UTC |
	|         | /tmp/nospam-338210 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-338210 --log_dir                                                  | nospam-338210     | jenkins | v1.33.1 | 12 Aug 24 10:28 UTC | 12 Aug 24 10:29 UTC |
	|         | /tmp/nospam-338210 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-338210 --log_dir                                                  | nospam-338210     | jenkins | v1.33.1 | 12 Aug 24 10:29 UTC | 12 Aug 24 10:29 UTC |
	|         | /tmp/nospam-338210 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-338210 --log_dir                                                  | nospam-338210     | jenkins | v1.33.1 | 12 Aug 24 10:29 UTC | 12 Aug 24 10:29 UTC |
	|         | /tmp/nospam-338210 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-338210                                                         | nospam-338210     | jenkins | v1.33.1 | 12 Aug 24 10:29 UTC | 12 Aug 24 10:29 UTC |
	| start   | -p functional-470148                                                     | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:29 UTC | 12 Aug 24 10:30 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                                                 |                   |         |         |                     |                     |
	| start   | -p functional-470148                                                     | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:30 UTC | 12 Aug 24 10:31 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-470148 cache add                                              | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-470148 cache add                                              | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-470148 cache add                                              | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-470148 cache add                                              | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
	|         | minikube-local-cache-test:functional-470148                              |                   |         |         |                     |                     |
	| cache   | functional-470148 cache delete                                           | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
	|         | minikube-local-cache-test:functional-470148                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
	| ssh     | functional-470148 ssh sudo                                               | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-470148                                                        | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
	|         | ssh sudo docker rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-470148 ssh                                                    | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-470148 cache reload                                           | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
	| ssh     | functional-470148 ssh                                                    | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-470148 kubectl --                                             | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
	|         | --context functional-470148                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-470148                                                     | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:33 UTC |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 10:31:49
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 10:31:49.324897   18189 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:31:49.325160   18189 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:31:49.325171   18189 out.go:304] Setting ErrFile to fd 2...
	I0812 10:31:49.325175   18189 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:31:49.325334   18189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3796/.minikube/bin
	I0812 10:31:49.325852   18189 out.go:298] Setting JSON to false
	I0812 10:31:49.326796   18189 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":857,"bootTime":1723457852,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 10:31:49.326855   18189 start.go:139] virtualization: kvm guest
	I0812 10:31:49.328836   18189 out.go:177] * [functional-470148] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 10:31:49.330695   18189 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 10:31:49.330747   18189 notify.go:220] Checking for updates...
	I0812 10:31:49.333186   18189 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 10:31:49.334411   18189 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3796/kubeconfig
	I0812 10:31:49.335670   18189 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3796/.minikube
	I0812 10:31:49.336895   18189 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 10:31:49.338020   18189 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 10:31:49.339647   18189 config.go:182] Loaded profile config "functional-470148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 10:31:49.339726   18189 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 10:31:49.340160   18189 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:31:49.340220   18189 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:31:49.354920   18189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45375
	I0812 10:31:49.355331   18189 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:31:49.355906   18189 main.go:141] libmachine: Using API Version  1
	I0812 10:31:49.355927   18189 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:31:49.356253   18189 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:31:49.356436   18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
	I0812 10:31:49.388065   18189 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 10:31:49.389244   18189 start.go:297] selected driver: kvm2
	I0812 10:31:49.389251   18189 start.go:901] validating driver "kvm2" against &{Name:functional-470148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-470148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:31:49.389339   18189 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 10:31:49.389645   18189 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:31:49.389702   18189 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3796/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 10:31:49.404337   18189 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 10:31:49.405051   18189 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 10:31:49.405075   18189 cni.go:84] Creating CNI manager for ""
	I0812 10:31:49.405085   18189 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 10:31:49.405152   18189 start.go:340] cluster config:
	{Name:functional-470148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-470148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:31:49.405243   18189 iso.go:125] acquiring lock: {Name:mk12273493f47d7003f1469d85b691a3ad57d0c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:31:49.407097   18189 out.go:177] * Starting "functional-470148" primary control-plane node in "functional-470148" cluster
	I0812 10:31:49.408179   18189 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 10:31:49.408216   18189 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3796/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0812 10:31:49.408223   18189 cache.go:56] Caching tarball of preloaded images
	I0812 10:31:49.408325   18189 preload.go:172] Found /home/jenkins/minikube-integration/19409-3796/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0812 10:31:49.408335   18189 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0812 10:31:49.408428   18189 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/config.json ...
	I0812 10:31:49.408610   18189 start.go:360] acquireMachinesLock for functional-470148: {Name:mkd191140573e797c993374d5c6ae46963c640c3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 10:31:49.408662   18189 start.go:364] duration metric: took 39.452µs to acquireMachinesLock for "functional-470148"
	I0812 10:31:49.408675   18189 start.go:96] Skipping create...Using existing machine configuration
	I0812 10:31:49.408680   18189 fix.go:54] fixHost starting: 
	I0812 10:31:49.408955   18189 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:31:49.408990   18189 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:31:49.423508   18189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41525
	I0812 10:31:49.423924   18189 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:31:49.424399   18189 main.go:141] libmachine: Using API Version  1
	I0812 10:31:49.424420   18189 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:31:49.424712   18189 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:31:49.424864   18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
	I0812 10:31:49.424989   18189 main.go:141] libmachine: (functional-470148) Calling .GetState
	I0812 10:31:49.426548   18189 fix.go:112] recreateIfNeeded on functional-470148: state=Running err=<nil>
	W0812 10:31:49.426563   18189 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 10:31:49.428304   18189 out.go:177] * Updating the running kvm2 "functional-470148" VM ...
	I0812 10:31:49.429490   18189 machine.go:94] provisionDockerMachine start ...
	I0812 10:31:49.429504   18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
	I0812 10:31:49.429707   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
	I0812 10:31:49.431654   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:49.431981   18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
	I0812 10:31:49.432003   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:49.432120   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
	I0812 10:31:49.432262   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
	I0812 10:31:49.432435   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
	I0812 10:31:49.432571   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
	I0812 10:31:49.432762   18189 main.go:141] libmachine: Using SSH client type: native
	I0812 10:31:49.432931   18189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0812 10:31:49.432936   18189 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 10:31:49.542627   18189 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-470148
	
	I0812 10:31:49.542642   18189 main.go:141] libmachine: (functional-470148) Calling .GetMachineName
	I0812 10:31:49.542958   18189 buildroot.go:166] provisioning hostname "functional-470148"
	I0812 10:31:49.543021   18189 main.go:141] libmachine: (functional-470148) Calling .GetMachineName
	I0812 10:31:49.543236   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
	I0812 10:31:49.546008   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:49.546359   18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
	I0812 10:31:49.546380   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:49.546531   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
	I0812 10:31:49.546691   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
	I0812 10:31:49.546805   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
	I0812 10:31:49.546910   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
	I0812 10:31:49.547049   18189 main.go:141] libmachine: Using SSH client type: native
	I0812 10:31:49.547244   18189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0812 10:31:49.547254   18189 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-470148 && echo "functional-470148" | sudo tee /etc/hostname
	I0812 10:31:49.674258   18189 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-470148
	
	I0812 10:31:49.674277   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
	I0812 10:31:49.677173   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:49.677662   18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
	I0812 10:31:49.677684   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:49.678004   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
	I0812 10:31:49.678243   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
	I0812 10:31:49.678480   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
	I0812 10:31:49.678730   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
	I0812 10:31:49.678940   18189 main.go:141] libmachine: Using SSH client type: native
	I0812 10:31:49.679137   18189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0812 10:31:49.679148   18189 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-470148' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-470148/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-470148' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 10:31:49.791345   18189 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 10:31:49.791362   18189 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3796/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3796/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3796/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3796/.minikube}
	I0812 10:31:49.791413   18189 buildroot.go:174] setting up certificates
	I0812 10:31:49.791424   18189 provision.go:84] configureAuth start
	I0812 10:31:49.791432   18189 main.go:141] libmachine: (functional-470148) Calling .GetMachineName
	I0812 10:31:49.791733   18189 main.go:141] libmachine: (functional-470148) Calling .GetIP
	I0812 10:31:49.794371   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:49.794679   18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
	I0812 10:31:49.794701   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:49.794820   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
	I0812 10:31:49.796847   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:49.797142   18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
	I0812 10:31:49.797165   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:49.797257   18189 provision.go:143] copyHostCerts
	I0812 10:31:49.797321   18189 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3796/.minikube/ca.pem, removing ...
	I0812 10:31:49.797326   18189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3796/.minikube/ca.pem
	I0812 10:31:49.797397   18189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3796/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3796/.minikube/ca.pem (1078 bytes)
	I0812 10:31:49.797498   18189 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3796/.minikube/cert.pem, removing ...
	I0812 10:31:49.797502   18189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3796/.minikube/cert.pem
	I0812 10:31:49.797527   18189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3796/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3796/.minikube/cert.pem (1123 bytes)
	I0812 10:31:49.797585   18189 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3796/.minikube/key.pem, removing ...
	I0812 10:31:49.797588   18189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3796/.minikube/key.pem
	I0812 10:31:49.797607   18189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3796/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3796/.minikube/key.pem (1679 bytes)
	I0812 10:31:49.797660   18189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3796/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3796/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3796/.minikube/certs/ca-key.pem org=jenkins.functional-470148 san=[127.0.0.1 192.168.39.217 functional-470148 localhost minikube]
	I0812 10:31:50.182597   18189 provision.go:177] copyRemoteCerts
	I0812 10:31:50.182645   18189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 10:31:50.182679   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
	I0812 10:31:50.186066   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:50.186332   18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
	I0812 10:31:50.186354   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:50.186542   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
	I0812 10:31:50.186758   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
	I0812 10:31:50.186897   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
	I0812 10:31:50.187012   18189 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/functional-470148/id_rsa Username:docker}
	I0812 10:31:50.285851   18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0812 10:31:50.322845   18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0812 10:31:50.354164   18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 10:31:50.394061   18189 provision.go:87] duration metric: took 602.59553ms to configureAuth
	I0812 10:31:50.394082   18189 buildroot.go:189] setting minikube options for container-runtime
	I0812 10:31:50.394290   18189 config.go:182] Loaded profile config "functional-470148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 10:31:50.394325   18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
	I0812 10:31:50.394636   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
	I0812 10:31:50.397240   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:50.397628   18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
	I0812 10:31:50.397652   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:50.397805   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
	I0812 10:31:50.398012   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
	I0812 10:31:50.398165   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
	I0812 10:31:50.398289   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
	I0812 10:31:50.398414   18189 main.go:141] libmachine: Using SSH client type: native
	I0812 10:31:50.398613   18189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0812 10:31:50.398619   18189 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0812 10:31:50.524165   18189 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0812 10:31:50.524180   18189 buildroot.go:70] root file system type: tmpfs
	I0812 10:31:50.524277   18189 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0812 10:31:50.524289   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
	I0812 10:31:50.526935   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:50.527187   18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
	I0812 10:31:50.527208   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:50.527413   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
	I0812 10:31:50.527622   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
	I0812 10:31:50.527816   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
	I0812 10:31:50.527990   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
	I0812 10:31:50.528143   18189 main.go:141] libmachine: Using SSH client type: native
	I0812 10:31:50.528325   18189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0812 10:31:50.528378   18189 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0812 10:31:50.657878   18189 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0812 10:31:50.657909   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
	I0812 10:31:50.660749   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:50.661164   18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
	I0812 10:31:50.661188   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:50.661364   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
	I0812 10:31:50.661564   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
	I0812 10:31:50.661712   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
	I0812 10:31:50.661843   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
	I0812 10:31:50.661965   18189 main.go:141] libmachine: Using SSH client type: native
	I0812 10:31:50.662173   18189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0812 10:31:50.662184   18189 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0812 10:31:50.790209   18189 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 10:31:50.790225   18189 machine.go:97] duration metric: took 1.360727547s to provisionDockerMachine
	I0812 10:31:50.790236   18189 start.go:293] postStartSetup for "functional-470148" (driver="kvm2")
	I0812 10:31:50.790245   18189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 10:31:50.790267   18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
	I0812 10:31:50.790633   18189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 10:31:50.790662   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
	I0812 10:31:50.795653   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:50.796211   18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
	I0812 10:31:50.796227   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:50.796625   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
	I0812 10:31:50.796952   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
	I0812 10:31:50.797278   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
	I0812 10:31:50.797494   18189 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/functional-470148/id_rsa Username:docker}
	I0812 10:31:50.886234   18189 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 10:31:50.891780   18189 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 10:31:50.891805   18189 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3796/.minikube/addons for local assets ...
	I0812 10:31:50.891939   18189 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3796/.minikube/files for local assets ...
	I0812 10:31:50.892017   18189 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3796/.minikube/files/etc/ssl/certs/109682.pem -> 109682.pem in /etc/ssl/certs
	I0812 10:31:50.892089   18189 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3796/.minikube/files/etc/test/nested/copy/10968/hosts -> hosts in /etc/test/nested/copy/10968
	I0812 10:31:50.892140   18189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/10968
	I0812 10:31:50.905798   18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/files/etc/ssl/certs/109682.pem --> /etc/ssl/certs/109682.pem (1708 bytes)
	I0812 10:31:50.940801   18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/files/etc/test/nested/copy/10968/hosts --> /etc/test/nested/copy/10968/hosts (40 bytes)
	I0812 10:31:50.974998   18189 start.go:296] duration metric: took 184.748014ms for postStartSetup
	I0812 10:31:50.975030   18189 fix.go:56] duration metric: took 1.566350639s for fixHost
	I0812 10:31:50.975049   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
	I0812 10:31:50.977953   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:50.978460   18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
	I0812 10:31:50.978484   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:50.978696   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
	I0812 10:31:50.978895   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
	I0812 10:31:50.979045   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
	I0812 10:31:50.979195   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
	I0812 10:31:50.979353   18189 main.go:141] libmachine: Using SSH client type: native
	I0812 10:31:50.979516   18189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0812 10:31:50.979522   18189 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 10:31:51.095978   18189 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723458711.073155213
	
	I0812 10:31:51.095994   18189 fix.go:216] guest clock: 1723458711.073155213
	I0812 10:31:51.096003   18189 fix.go:229] Guest: 2024-08-12 10:31:51.073155213 +0000 UTC Remote: 2024-08-12 10:31:50.975032818 +0000 UTC m=+1.686663349 (delta=98.122395ms)
	I0812 10:31:51.096052   18189 fix.go:200] guest clock delta is within tolerance: 98.122395ms
	I0812 10:31:51.096057   18189 start.go:83] releasing machines lock for "functional-470148", held for 1.687388646s
	I0812 10:31:51.096074   18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
	I0812 10:31:51.096332   18189 main.go:141] libmachine: (functional-470148) Calling .GetIP
	I0812 10:31:51.099313   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:51.099686   18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
	I0812 10:31:51.099711   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:51.099933   18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
	I0812 10:31:51.100581   18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
	I0812 10:31:51.100755   18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
	I0812 10:31:51.100846   18189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 10:31:51.100881   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
	I0812 10:31:51.101019   18189 ssh_runner.go:195] Run: cat /version.json
	I0812 10:31:51.101037   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
	I0812 10:31:51.103680   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:51.103984   18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
	I0812 10:31:51.104007   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:51.104043   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:51.104176   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
	I0812 10:31:51.104396   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
	I0812 10:31:51.104442   18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
	I0812 10:31:51.104458   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:31:51.104562   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
	I0812 10:31:51.104566   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
	I0812 10:31:51.104704   18189 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/functional-470148/id_rsa Username:docker}
	I0812 10:31:51.104910   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
	I0812 10:31:51.105113   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
	I0812 10:31:51.105300   18189 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/functional-470148/id_rsa Username:docker}
	I0812 10:31:51.208611   18189 ssh_runner.go:195] Run: systemctl --version
	I0812 10:31:51.216618   18189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 10:31:51.223237   18189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 10:31:51.223301   18189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 10:31:51.236127   18189 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
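The find/mv pipeline above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so the runtime won't pick them up. A rough Go equivalent, run locally rather than over SSH as minikube does:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeConfigs renames bridge/podman CNI configs to
// *.mk_disabled, the same effect as the find/mv pipeline above.
func disableBridgeConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	moved, err := disableBridgeConfigs("/etc/cni/net.d")
	fmt.Println(moved, err)
}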
	I0812 10:31:51.236147   18189 start.go:495] detecting cgroup driver to use...
	I0812 10:31:51.236285   18189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 10:31:51.258092   18189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0812 10:31:51.270753   18189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0812 10:31:51.288025   18189 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0812 10:31:51.288100   18189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0812 10:31:51.301819   18189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0812 10:31:51.314767   18189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0812 10:31:51.328640   18189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0812 10:31:51.342558   18189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 10:31:51.355537   18189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0812 10:31:51.369032   18189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0812 10:31:51.382663   18189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
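The sed runs above rewrite /etc/containerd/config.toml in place: pin the pause sandbox image, force SystemdCgroup = false to match the cgroupfs driver, normalize the runc runtime to v2, and point conf_dir at /etc/cni/net.d. A sketch of driving such idempotent edits from Go; runSSH is a hypothetical helper standing in for minikube's ssh_runner:

package main

import "fmt"

// configureContainerd applies the same idempotent sed edits shown in
// the log above. runSSH executes a shell command on the guest VM.
func configureContainerd(runSSH func(cmd string) error) error {
	edits := []string{
		// Match the kubelet's cgroupfs driver.
		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
		// Pin the pod sandbox (pause) image.
		`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml`,
		// Point CNI at the standard config directory.
		`sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
	}
	for _, e := range edits {
		if err := runSSH(e); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = configureContainerd(func(cmd string) error {
		fmt.Println("ssh:", cmd) // dry run: just echo each edit
		return nil
	})
}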
	I0812 10:31:51.396374   18189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 10:31:51.407743   18189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 10:31:51.420076   18189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:31:51.610870   18189 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0812 10:31:51.641354   18189 start.go:495] detecting cgroup driver to use...
	I0812 10:31:51.641450   18189 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0812 10:31:51.662164   18189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 10:31:51.681893   18189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 10:31:51.704062   18189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 10:31:51.721662   18189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
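Runtime selection above leans on exit codes: `systemctl is-active --quiet <unit>` exits 0 only when the unit is running, which is how minikube decides to stop containerd before standing up Docker. The same probe in Go:

package main

import (
	"fmt"
	"os/exec"
)

// isActive mirrors `sudo systemctl is-active --quiet service <unit>`
// from the log above; a nil error means exit status 0, i.e. running.
func isActive(unit string) bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", unit).Run() == nil
}

func main() {
	for _, u := range []string{"containerd", "crio"} {
		fmt.Println(u, isActive(u))
	}
}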
	I0812 10:31:51.738746   18189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 10:31:51.762088   18189 ssh_runner.go:195] Run: which cri-dockerd
	I0812 10:31:51.766902   18189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0812 10:31:51.788580   18189 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0812 10:31:51.810629   18189 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0812 10:31:51.999368   18189 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0812 10:31:52.167938   18189 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0812 10:31:52.168095   18189 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0812 10:31:52.206776   18189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:31:52.370822   18189 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0812 10:32:05.144143   18189 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.77329475s)
	I0812 10:32:05.144204   18189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0812 10:32:05.172772   18189 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0812 10:32:05.198016   18189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0812 10:32:05.214736   18189 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0812 10:32:05.350641   18189 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0812 10:32:05.490156   18189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:32:05.622675   18189 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0812 10:32:05.642455   18189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0812 10:32:05.657762   18189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:32:05.788401   18189 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0812 10:32:05.907094   18189 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0812 10:32:05.907154   18189 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
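The "Will wait 60s for socket path" step above is a bounded poll: stat the socket until it exists or the deadline passes. A minimal sketch of the pattern; note minikube runs stat over SSH, while this probes the local filesystem:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the socket path exists or the timeout
// expires, mirroring the bounded wait logged above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for " + path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}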
	I0812 10:32:05.914368   18189 start.go:563] Will wait 60s for crictl version
	I0812 10:32:05.914440   18189 ssh_runner.go:195] Run: which crictl
	I0812 10:32:05.919064   18189 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 10:32:05.958060   18189 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0812 10:32:05.958144   18189 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0812 10:32:05.988263   18189 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0812 10:32:06.017116   18189 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0812 10:32:06.017162   18189 main.go:141] libmachine: (functional-470148) Calling .GetIP
	I0812 10:32:06.020221   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:32:06.020577   18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
	I0812 10:32:06.020614   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:32:06.020902   18189 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 10:32:06.027401   18189 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0812 10:32:06.028727   18189 kubeadm.go:883] updating cluster {Name:functional-470148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-470148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 10:32:06.028855   18189 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0812 10:32:06.028914   18189 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0812 10:32:06.049585   18189 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-470148
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0812 10:32:06.049598   18189 docker.go:615] Images already preloaded, skipping extraction
	I0812 10:32:06.049665   18189 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0812 10:32:06.070440   18189 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-470148
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0812 10:32:06.070455   18189 cache_images.go:84] Images are preloaded, skipping loading
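The preload check above is a set comparison: list what Docker already has via --format {{.Repository}}:{{.Tag}} and skip extraction when every expected tag is present. A sketch of that comparison:

package main

import (
	"fmt"
	"strings"
)

// imagesPreloaded reports whether every expected image already appears
// in the `docker images --format {{.Repository}}:{{.Tag}}` output.
func imagesPreloaded(dockerImagesOut string, expected []string) bool {
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(dockerImagesOut), "\n") {
		have[strings.TrimSpace(line)] = true
	}
	for _, img := range expected {
		if !have[img] {
			return false
		}
	}
	return true
}

func main() {
	out := "registry.k8s.io/etcd:3.5.12-0\nregistry.k8s.io/pause:3.9\n"
	fmt.Println(imagesPreloaded(out, []string{"registry.k8s.io/pause:3.9"}))
}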
	I0812 10:32:06.070467   18189 kubeadm.go:934] updating node { 192.168.39.217 8441 v1.30.3 docker true true} ...
	I0812 10:32:06.070597   18189 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-470148 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:functional-470148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 10:32:06.070666   18189 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0812 10:32:06.140616   18189 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0812 10:32:06.140724   18189 cni.go:84] Creating CNI manager for ""
	I0812 10:32:06.140749   18189 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 10:32:06.140823   18189 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 10:32:06.140900   18189 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8441 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-470148 NodeName:functional-470148 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 10:32:06.141140   18189 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-470148"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 10:32:06.141284   18189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 10:32:06.152886   18189 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 10:32:06.152951   18189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 10:32:06.163458   18189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0812 10:32:06.183318   18189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 10:32:06.203078   18189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2015 bytes)
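The 2015-byte kubeadm.yaml.new written above is rendered from the cluster config by template substitution. A toy sketch of just the apiServer stanza via text/template; the template text and field names are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A toy template covering only the apiServer stanza of the kubeadm
// config shown earlier; the real config carries many more sections.
const apiServerTmpl = `apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.NodeIP}}"]
  extraArgs:
    enable-admission-plugins: "{{.AdmissionPlugins}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(apiServerTmpl))
	_ = t.Execute(os.Stdout, map[string]string{
		"NodeIP":           "192.168.39.217",
		"AdmissionPlugins": "NamespaceAutoProvision",
	})
}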
	I0812 10:32:06.224599   18189 ssh_runner.go:195] Run: grep 192.168.39.217	control-plane.minikube.internal$ /etc/hosts
	I0812 10:32:06.229358   18189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:32:06.357393   18189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 10:32:06.373737   18189 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148 for IP: 192.168.39.217
	I0812 10:32:06.373753   18189 certs.go:194] generating shared ca certs ...
	I0812 10:32:06.373773   18189 certs.go:226] acquiring lock for ca certs: {Name:mkadbb95e03b53e6a3c34b2efd2db9368412cbc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:32:06.373942   18189 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3796/.minikube/ca.key
	I0812 10:32:06.373986   18189 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3796/.minikube/proxy-client-ca.key
	I0812 10:32:06.373992   18189 certs.go:256] generating profile certs ...
	I0812 10:32:06.374103   18189 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.key
	I0812 10:32:06.374158   18189 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/apiserver.key.883b791d
	I0812 10:32:06.374193   18189 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/proxy-client.key
	I0812 10:32:06.374308   18189 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3796/.minikube/certs/10968.pem (1338 bytes)
	W0812 10:32:06.374333   18189 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3796/.minikube/certs/10968_empty.pem, impossibly tiny 0 bytes
	I0812 10:32:06.374339   18189 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3796/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 10:32:06.374357   18189 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3796/.minikube/certs/ca.pem (1078 bytes)
	I0812 10:32:06.374377   18189 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3796/.minikube/certs/cert.pem (1123 bytes)
	I0812 10:32:06.374400   18189 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3796/.minikube/certs/key.pem (1679 bytes)
	I0812 10:32:06.374434   18189 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3796/.minikube/files/etc/ssl/certs/109682.pem (1708 bytes)
	I0812 10:32:06.375049   18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 10:32:06.401316   18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 10:32:06.426749   18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 10:32:06.453208   18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 10:32:06.479475   18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0812 10:32:06.507750   18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 10:32:06.534244   18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 10:32:06.562226   18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 10:32:06.589396   18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/certs/10968.pem --> /usr/share/ca-certificates/10968.pem (1338 bytes)
	I0812 10:32:06.622864   18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/files/etc/ssl/certs/109682.pem --> /usr/share/ca-certificates/109682.pem (1708 bytes)
	I0812 10:32:06.654544   18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 10:32:06.684822   18189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 10:32:06.705593   18189 ssh_runner.go:195] Run: openssl version
	I0812 10:32:06.712399   18189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10968.pem && ln -fs /usr/share/ca-certificates/10968.pem /etc/ssl/certs/10968.pem"
	I0812 10:32:06.725189   18189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10968.pem
	I0812 10:32:06.730272   18189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:29 /usr/share/ca-certificates/10968.pem
	I0812 10:32:06.730321   18189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10968.pem
	I0812 10:32:06.737236   18189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10968.pem /etc/ssl/certs/51391683.0"
	I0812 10:32:06.748481   18189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109682.pem && ln -fs /usr/share/ca-certificates/109682.pem /etc/ssl/certs/109682.pem"
	I0812 10:32:06.760868   18189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109682.pem
	I0812 10:32:06.766375   18189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:29 /usr/share/ca-certificates/109682.pem
	I0812 10:32:06.766428   18189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109682.pem
	I0812 10:32:06.773277   18189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109682.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 10:32:06.784269   18189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 10:32:06.798184   18189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:32:06.803789   18189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:32:06.803885   18189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 10:32:06.810957   18189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
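The openssl/ln sequence above installs each CA under its OpenSSL subject-hash name (e.g. b5213941.0) in /etc/ssl/certs, which is how TLS stacks locate trusted roots by hash. Roughly the same idea in Go, shelling out to openssl (run as root, local paths):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCACert symlinks certPath into /etc/ssl/certs under its OpenSSL
// subject-hash name (<hash>.0), mirroring the openssl/ln commands above.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// ln -fs equivalent: replace any stale link first.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}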
	I0812 10:32:06.822279   18189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 10:32:06.828266   18189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0812 10:32:06.834643   18189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0812 10:32:06.841220   18189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0812 10:32:06.847703   18189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0812 10:32:06.854533   18189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0812 10:32:06.861525   18189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
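Each `-checkend 86400` call above asks whether the certificate remains valid for at least another 24 hours. The equivalent check in pure Go with crypto/x509, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid
// `window` from now, the same test as `openssl x509 -checkend 86400`.
func validFor(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}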
	I0812 10:32:06.868440   18189 kubeadm.go:392] StartCluster: {Name:functional-470148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-470148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:32:06.868572   18189 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0812 10:32:06.888257   18189 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 10:32:06.900277   18189 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0812 10:32:06.900289   18189 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0812 10:32:06.900369   18189 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0812 10:32:06.912580   18189 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:32:06.913131   18189 kubeconfig.go:125] found "functional-470148" server: "https://192.168.39.217:8441"
	I0812 10:32:06.914379   18189 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0812 10:32:06.927747   18189 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
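Drift detection above is plain `diff -u` semantics: exit 0 means the rendered config is unchanged, exit 1 means it differs and the control plane gets reconfigured from the new file. A sketch of that decision:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u` on the old and new kubeadm configs;
// diff exits 1 when the files differ, which is treated as "reconfigure".
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // files differ
	}
	return false, "", err // diff itself failed (exit 2)
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
	fmt.Print(diff)
}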
	I0812 10:32:06.927756   18189 kubeadm.go:1160] stopping kube-system containers ...
	I0812 10:32:06.927815   18189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0812 10:32:06.952926   18189 docker.go:483] Stopping containers: [4f0c8adf0dda 1f1124951798 16616cb9ce5d 6847d5bfe08c 6cd4ba5fbd18 ba1224227c45 7bdc8c688102 a82fb1fec552 b318d7a1a722 efdfc20ff005 e46ea15b50bc 4360cfb87e38 4051e49c8f5a d6db8459618c d8221a352bff 6da3427c4816 dc11fc027362 4ee2b1cf700c ee9b9294facf 297a7221af7c 06a49bcd2956 d0e7a3e717da 8d3b18401964 61af2576b926 aabf8fa23d86 8871e6806b3a 99e71abeb7cb 50aafe7542ee 93a24bdd7dba]
	I0812 10:32:06.953014   18189 ssh_runner.go:195] Run: docker stop 4f0c8adf0dda 1f1124951798 16616cb9ce5d 6847d5bfe08c 6cd4ba5fbd18 ba1224227c45 7bdc8c688102 a82fb1fec552 b318d7a1a722 efdfc20ff005 e46ea15b50bc 4360cfb87e38 4051e49c8f5a d6db8459618c d8221a352bff 6da3427c4816 dc11fc027362 4ee2b1cf700c ee9b9294facf 297a7221af7c 06a49bcd2956 d0e7a3e717da 8d3b18401964 61af2576b926 aabf8fa23d86 8871e6806b3a 99e71abeb7cb 50aafe7542ee 93a24bdd7dba
	I0812 10:32:06.979499   18189 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0812 10:32:07.022821   18189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 10:32:07.034531   18189 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Aug 12 10:29 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Aug 12 10:31 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Aug 12 10:30 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Aug 12 10:31 /etc/kubernetes/scheduler.conf
	
	I0812 10:32:07.034586   18189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0812 10:32:07.044679   18189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0812 10:32:07.055419   18189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0812 10:32:07.066575   18189 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:32:07.066629   18189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 10:32:07.078097   18189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0812 10:32:07.088829   18189 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:32:07.088879   18189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 10:32:07.099853   18189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 10:32:07.110821   18189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 10:32:07.180503   18189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 10:32:07.939973   18189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0812 10:32:08.173314   18189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 10:32:08.277495   18189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0812 10:32:08.444493   18189 api_server.go:52] waiting for apiserver process to appear ...
	I0812 10:32:08.444582   18189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:32:08.944994   18189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:32:09.444618   18189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:32:09.944782   18189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:32:09.963322   18189 api_server.go:72] duration metric: took 1.518828441s to wait for apiserver process to appear ...
	I0812 10:32:09.963337   18189 api_server.go:88] waiting for apiserver healthz status ...
	I0812 10:32:09.963366   18189 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
	I0812 10:32:13.151731   18189 api_server.go:279] https://192.168.39.217:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0812 10:32:13.151753   18189 api_server.go:103] status: https://192.168.39.217:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0812 10:32:13.151776   18189 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
	I0812 10:32:13.189667   18189 api_server.go:279] https://192.168.39.217:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0812 10:32:13.189685   18189 api_server.go:103] status: https://192.168.39.217:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0812 10:32:13.464272   18189 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
	I0812 10:32:13.469514   18189 api_server.go:279] https://192.168.39.217:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 10:32:13.469536   18189 api_server.go:103] status: https://192.168.39.217:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 10:32:13.964362   18189 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
	I0812 10:32:13.969990   18189 api_server.go:279] https://192.168.39.217:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 10:32:13.970015   18189 api_server.go:103] status: https://192.168.39.217:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 10:32:14.463965   18189 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
	I0812 10:32:14.475459   18189 api_server.go:279] https://192.168.39.217:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 10:32:14.475479   18189 api_server.go:103] status: https://192.168.39.217:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 10:32:14.964148   18189 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
	I0812 10:32:14.970257   18189 api_server.go:279] https://192.168.39.217:8441/healthz returned 200:
	ok
	I0812 10:32:14.979747   18189 api_server.go:141] control plane version: v1.30.3
	I0812 10:32:14.979779   18189 api_server.go:131] duration metric: took 5.016435807s to wait for apiserver health ...
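The 403 → 500 → 200 progression above is the normal apiserver boot sequence: anonymous /healthz is forbidden until the RBAC bootstrap roles exist, then post-start hooks report [-] until they complete. A minimal poller in that spirit, treating any non-200 as retryable (TLS verification skipped, as one would for the self-signed bootstrap cert):

package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns
// 200 "ok" or the deadline passes; 403 and 500 count as "not yet".
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("apiserver never became healthy")
}

func main() {
	_ = waitHealthz("https://192.168.39.217:8441/healthz", time.Minute)
}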
	I0812 10:32:14.979859   18189 cni.go:84] Creating CNI manager for ""
	I0812 10:32:14.979892   18189 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 10:32:14.982061   18189 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 10:32:14.984231   18189 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 10:32:14.997179   18189 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0812 10:32:15.026630   18189 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 10:32:15.039873   18189 system_pods.go:59] 7 kube-system pods found
	I0812 10:32:15.039899   18189 system_pods.go:61] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0812 10:32:15.039907   18189 system_pods.go:61] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0812 10:32:15.039922   18189 system_pods.go:61] "kube-apiserver-functional-470148" [c5774f60-aeeb-42e8-b996-40a18d4353a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0812 10:32:15.039930   18189 system_pods.go:61] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0812 10:32:15.039936   18189 system_pods.go:61] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
	I0812 10:32:15.039943   18189 system_pods.go:61] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0812 10:32:15.039952   18189 system_pods.go:61] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0812 10:32:15.039958   18189 system_pods.go:74] duration metric: took 13.314671ms to wait for pod list to return data ...
	I0812 10:32:15.039967   18189 node_conditions.go:102] verifying NodePressure condition ...
	I0812 10:32:15.045779   18189 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 10:32:15.045794   18189 node_conditions.go:123] node cpu capacity is 2
	I0812 10:32:15.045804   18189 node_conditions.go:105] duration metric: took 5.833898ms to run NodePressure ...
	I0812 10:32:15.045823   18189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 10:32:15.448095   18189 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0812 10:32:15.466840   18189 kubeadm.go:739] kubelet initialised
	I0812 10:32:15.466853   18189 kubeadm.go:740] duration metric: took 18.734289ms waiting for restarted kubelet to initialise ...
	I0812 10:32:15.466861   18189 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 10:32:15.476331   18189 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kvjbq" in "kube-system" namespace to be "Ready" ...
	I0812 10:32:17.484174   18189 pod_ready.go:102] pod "coredns-7db6d8ff4d-kvjbq" in "kube-system" namespace has status "Ready":"False"
	I0812 10:32:17.983220   18189 pod_ready.go:92] pod "coredns-7db6d8ff4d-kvjbq" in "kube-system" namespace has status "Ready":"True"
	I0812 10:32:17.983234   18189 pod_ready.go:81] duration metric: took 2.506888368s for pod "coredns-7db6d8ff4d-kvjbq" in "kube-system" namespace to be "Ready" ...
	I0812 10:32:17.983245   18189 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-470148" in "kube-system" namespace to be "Ready" ...
	I0812 10:32:19.990519   18189 pod_ready.go:102] pod "etcd-functional-470148" in "kube-system" namespace has status "Ready":"False"
	I0812 10:32:21.991085   18189 pod_ready.go:102] pod "etcd-functional-470148" in "kube-system" namespace has status "Ready":"False"
	I0812 10:32:22.992540   18189 pod_ready.go:92] pod "etcd-functional-470148" in "kube-system" namespace has status "Ready":"True"
	I0812 10:32:22.992554   18189 pod_ready.go:81] duration metric: took 5.009302185s for pod "etcd-functional-470148" in "kube-system" namespace to be "Ready" ...
	I0812 10:32:22.992565   18189 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-470148" in "kube-system" namespace to be "Ready" ...
	I0812 10:32:24.001002   18189 pod_ready.go:92] pod "kube-apiserver-functional-470148" in "kube-system" namespace has status "Ready":"True"
	I0812 10:32:24.001020   18189 pod_ready.go:81] duration metric: took 1.008442945s for pod "kube-apiserver-functional-470148" in "kube-system" namespace to be "Ready" ...
	I0812 10:32:24.001029   18189 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-470148" in "kube-system" namespace to be "Ready" ...
	I0812 10:32:26.008279   18189 pod_ready.go:102] pod "kube-controller-manager-functional-470148" in "kube-system" namespace has status "Ready":"False"
	I0812 10:32:27.008234   18189 pod_ready.go:92] pod "kube-controller-manager-functional-470148" in "kube-system" namespace has status "Ready":"True"
	I0812 10:32:27.008246   18189 pod_ready.go:81] duration metric: took 3.007211622s for pod "kube-controller-manager-functional-470148" in "kube-system" namespace to be "Ready" ...
	I0812 10:32:27.008256   18189 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xmv5n" in "kube-system" namespace to be "Ready" ...
	I0812 10:32:27.013647   18189 pod_ready.go:92] pod "kube-proxy-xmv5n" in "kube-system" namespace has status "Ready":"True"
	I0812 10:32:27.013657   18189 pod_ready.go:81] duration metric: took 5.395908ms for pod "kube-proxy-xmv5n" in "kube-system" namespace to be "Ready" ...
	I0812 10:32:27.013663   18189 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-470148" in "kube-system" namespace to be "Ready" ...
	I0812 10:32:28.515351   18189 pod_ready.go:97] error getting pod "kube-scheduler-functional-470148" in "kube-system" namespace (skipping!): Get "https://192.168.39.217:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:32:28.515376   18189 pod_ready.go:81] duration metric: took 1.501705591s for pod "kube-scheduler-functional-470148" in "kube-system" namespace to be "Ready" ...
	E0812 10:32:28.515387   18189 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-470148" in "kube-system" namespace (skipping!): Get "https://192.168.39.217:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:32:28.515410   18189 pod_ready.go:38] duration metric: took 13.048540046s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
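The pod_ready.go waits above inspect each control-plane pod's Ready condition, which is the same condition TestFunctional/serial/ComponentHealth later found False for etcd and kube-apiserver. A client-go sketch of that check, assuming an already-configured clientset:

package componenthealth

import (
	"context"
	"errors"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady mirrors the Ready-condition check that pod_ready.go logs.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls until the named kube-system pod reports Ready=True
// or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return errors.New(name + " never became Ready")
}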
	I0812 10:32:28.515429   18189 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 10:32:28.533733   18189 ops.go:34] apiserver oom_adj: -16
	I0812 10:32:28.533746   18189 kubeadm.go:597] duration metric: took 21.633452504s to restartPrimaryControlPlane
	I0812 10:32:28.533755   18189 kubeadm.go:394] duration metric: took 21.665330355s to StartCluster
	I0812 10:32:28.533772   18189 settings.go:142] acquiring lock: {Name:mkba5c2b975cd0b8bdc203e1abd117d5ce4dcc08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:32:28.533857   18189 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19409-3796/kubeconfig
	I0812 10:32:28.534632   18189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3796/kubeconfig: {Name:mk907d76af9966fcc783a1f0e0b3b2a7c51b6dcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
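
Kubeconfig writes are serialized behind a named lock with a 500ms retry delay and a 1m timeout (the lock.go line above). A crude lock-file sketch with the same Delay/Timeout semantics, assuming nothing about minikube's real lock package:

    package lockfile

    import (
        "fmt"
        "os"
        "time"
    )

    // acquire takes an advisory lock by exclusively creating <path>.lock,
    // retrying every `delay` until `timeout` elapses.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        lock := path + ".lock"
        for {
            f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(lock) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("acquiring %s: timed out after %v", lock, timeout)
            }
            time.Sleep(delay)
        }
    }
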
	I0812 10:32:28.534890   18189 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0812 10:32:28.534954   18189 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 10:32:28.535029   18189 addons.go:69] Setting storage-provisioner=true in profile "functional-470148"
	I0812 10:32:28.535053   18189 addons.go:234] Setting addon storage-provisioner=true in "functional-470148"
	W0812 10:32:28.535057   18189 addons.go:243] addon storage-provisioner should already be in state true
	I0812 10:32:28.535048   18189 addons.go:69] Setting default-storageclass=true in profile "functional-470148"
	I0812 10:32:28.535079   18189 host.go:66] Checking if "functional-470148" exists ...
	I0812 10:32:28.535084   18189 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-470148"
	I0812 10:32:28.535117   18189 config.go:182] Loaded profile config "functional-470148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 10:32:28.535398   18189 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:32:28.535428   18189 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:32:28.535431   18189 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:32:28.535451   18189 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:32:28.536728   18189 out.go:177] * Verifying Kubernetes components...
	I0812 10:32:28.538138   18189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 10:32:28.551807   18189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36735
	I0812 10:32:28.551811   18189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43415
	I0812 10:32:28.552315   18189 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:32:28.552406   18189 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:32:28.552931   18189 main.go:141] libmachine: Using API Version  1
	I0812 10:32:28.552936   18189 main.go:141] libmachine: Using API Version  1
	I0812 10:32:28.552942   18189 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:32:28.552949   18189 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:32:28.553278   18189 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:32:28.553361   18189 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:32:28.553532   18189 main.go:141] libmachine: (functional-470148) Calling .GetState
	I0812 10:32:28.553789   18189 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:32:28.553821   18189 main.go:141] libmachine: Launching plugin server for driver kvm2
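
Each "Launching plugin server" / "Plugin server listening at address 127.0.0.1:PORT" pair is libmachine starting the kvm2 driver binary as a child process and speaking RPC to it over an ephemeral localhost port; the subsequent .GetVersion / .GetMachineName / .GetState lines are individual RPC calls against that port. A toy version of the pattern with net/rpc (illustrative only; libmachine's actual wire protocol has more plumbing):

    package main

    import (
        "log"
        "net"
        "net/rpc"
    )

    // Driver stands in for the kvm2 plugin's RPC surface.
    type Driver struct{}

    // GetVersion answers the API-version handshake ("Using API Version 1").
    func (d *Driver) GetVersion(args int, reply *int) error {
        *reply = 1
        return nil
    }

    func main() {
        srv := rpc.NewServer()
        if err := srv.Register(&Driver{}); err != nil {
            log.Fatal(err)
        }
        // Port 0 asks the kernel for an ephemeral port, like 36735/43415 above.
        l, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("Plugin server listening at address %s", l.Addr())
        srv.Accept(l) // serve RPC connections until the listener closes
    }
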
	I0812 10:32:28.556264   18189 addons.go:234] Setting addon default-storageclass=true in "functional-470148"
	W0812 10:32:28.556274   18189 addons.go:243] addon default-storageclass should already be in state true
	I0812 10:32:28.556303   18189 host.go:66] Checking if "functional-470148" exists ...
	I0812 10:32:28.556663   18189 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:32:28.556701   18189 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:32:28.569667   18189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39217
	I0812 10:32:28.570116   18189 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:32:28.570545   18189 main.go:141] libmachine: Using API Version  1
	I0812 10:32:28.570555   18189 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:32:28.570868   18189 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:32:28.571014   18189 main.go:141] libmachine: (functional-470148) Calling .GetState
	I0812 10:32:28.572704   18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
	I0812 10:32:28.574870   18189 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 10:32:28.575056   18189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46869
	I0812 10:32:28.575475   18189 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:32:28.576017   18189 main.go:141] libmachine: Using API Version  1
	I0812 10:32:28.576033   18189 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:32:28.576154   18189 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 10:32:28.576165   18189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 10:32:28.576182   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
	I0812 10:32:28.576402   18189 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:32:28.576993   18189 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:32:28.577021   18189 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:32:28.579246   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:32:28.579663   18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
	I0812 10:32:28.579678   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:32:28.579867   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
	I0812 10:32:28.579987   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
	I0812 10:32:28.580084   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
	I0812 10:32:28.580151   18189 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/functional-470148/id_rsa Username:docker}
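
"scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)" means the manifest is streamed from the test binary's memory to the guest over the SSH client created above, never touching the host disk. A sketch with golang.org/x/crypto/ssh; writeRemoteFile is a hypothetical name, and piping through sudo tee is just one way to do the write:

    package sshutil

    import (
        "bytes"

        "golang.org/x/crypto/ssh"
    )

    // writeRemoteFile streams in-memory bytes to a path on the guest by
    // piping them into 'sudo tee' over a fresh SSH session.
    func writeRemoteFile(client *ssh.Client, data []byte, dst string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run("sudo tee " + dst + " >/dev/null")
    }
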
	I0812 10:32:28.596594   18189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37857
	I0812 10:32:28.597025   18189 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:32:28.597609   18189 main.go:141] libmachine: Using API Version  1
	I0812 10:32:28.597623   18189 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:32:28.597945   18189 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:32:28.598208   18189 main.go:141] libmachine: (functional-470148) Calling .GetState
	I0812 10:32:28.600163   18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
	I0812 10:32:28.600416   18189 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 10:32:28.600426   18189 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 10:32:28.600446   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
	I0812 10:32:28.603256   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:32:28.603744   18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
	I0812 10:32:28.603769   18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
	I0812 10:32:28.603990   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
	I0812 10:32:28.604199   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
	I0812 10:32:28.604348   18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
	I0812 10:32:28.604477   18189 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/functional-470148/id_rsa Username:docker}
	I0812 10:32:28.737151   18189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 10:32:28.754776   18189 node_ready.go:35] waiting up to 6m0s for node "functional-470148" to be "Ready" ...
	I0812 10:32:28.830404   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0812 10:32:28.899372   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:28.899400   18189 retry.go:31] will retry after 267.115997ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
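
Every failed apply is retried after a randomized, growing delay: 267ms here, then 359ms, 512ms, and eventually tens of seconds further down, because the apiserver behind localhost:8441 is still restarting and refusing connections. The cadence is a jittered exponential backoff; a generic sketch of that technique (not minikube's actual retry package):

    package retry

    import (
        "context"
        "math/rand"
        "time"
    )

    // withBackoff retries fn with exponentially growing, jittered delays
    // until it succeeds, attempts are exhausted, or ctx is cancelled.
    func withBackoff(ctx context.Context, attempts int, base time.Duration, fn func() error) error {
        delay := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay/2) + 1))
            select {
            case <-time.After(delay + jitter):
            case <-ctx.Done():
                return ctx.Err()
            }
            delay *= 2
        }
        return err
    }

The jitter keeps concurrent retriers (here, the two addon applies running side by side) from hammering the endpoint in lockstep.
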
	I0812 10:32:28.933502   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0812 10:32:29.008511   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:29.008537   18189 retry.go:31] will retry after 359.254435ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:29.166849   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0812 10:32:29.230500   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:29.230522   18189 retry.go:31] will retry after 512.327925ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:29.368659   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0812 10:32:29.433351   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:29.433376   18189 retry.go:31] will retry after 335.410572ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:29.743890   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0812 10:32:29.769326   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0812 10:32:29.825487   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:29.825516   18189 retry.go:31] will retry after 383.088186ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0812 10:32:29.856120   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:29.856150   18189 retry.go:31] will retry after 725.222424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:30.209632   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0812 10:32:30.280949   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:30.280977   18189 retry.go:31] will retry after 1.187875626s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:30.582441   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0812 10:32:30.646307   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:30.646342   18189 retry.go:31] will retry after 532.861209ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:30.756121   18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:32:31.179647   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0812 10:32:31.250739   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:31.250773   18189 retry.go:31] will retry after 899.135469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:31.469001   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0812 10:32:31.538660   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:31.538689   18189 retry.go:31] will retry after 1.408200519s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:32.150259   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0812 10:32:32.214257   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:32.214283   18189 retry.go:31] will retry after 1.78359862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:32.947872   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0812 10:32:33.018991   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:33.019017   18189 retry.go:31] will retry after 2.821630245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:33.256056   18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:32:33.998465   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0812 10:32:34.075499   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:34.075528   18189 retry.go:31] will retry after 2.344837357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:35.756340   18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:32:35.841574   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0812 10:32:35.923563   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:35.923590   18189 retry.go:31] will retry after 1.672401183s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:36.421118   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0812 10:32:36.489991   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:36.490012   18189 retry.go:31] will retry after 3.815744723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:37.596156   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0812 10:32:37.667874   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:37.667902   18189 retry.go:31] will retry after 5.828338709s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:38.256017   18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:32:40.306181   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0812 10:32:40.374664   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:40.374687   18189 retry.go:31] will retry after 3.82366058s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:40.755745   18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:32:42.755778   18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:32:43.496565   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0812 10:32:43.570163   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:43.570191   18189 retry.go:31] will retry after 8.107200931s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:44.198557   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0812 10:32:44.261628   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:44.261648   18189 retry.go:31] will retry after 6.162963503s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:44.756248   18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:32:47.255547   18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:32:49.756380   18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:32:50.424944   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0812 10:32:50.487049   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:50.487072   18189 retry.go:31] will retry after 8.807074684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:51.677709   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0812 10:32:51.752116   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:51.752139   18189 retry.go:31] will retry after 9.888706894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:52.256335   18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:32:54.756360   18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:32:57.255713   18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:32:59.256552   18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:32:59.294797   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0812 10:32:59.364950   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:32:59.364978   18189 retry.go:31] will retry after 23.085905643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:33:01.641184   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0812 10:33:01.719045   18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:33:01.719067   18189 retry.go:31] will retry after 17.311771994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0812 10:33:01.755839   18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:33:03.756200   18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:33:06.255865   18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:33:08.256480   18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:33:10.756292   18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:33:12.513770   18189 node_ready.go:49] node "functional-470148" has status "Ready":"True"
	I0812 10:33:12.513782   18189 node_ready.go:38] duration metric: took 43.758987086s for node "functional-470148" to be "Ready" ...
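
The node wait above polled every ~2.5s for 43.8s, logging connection-refused until the restarted apiserver came back. An event-driven alternative is to watch the node instead of polling; a client-go sketch (minikube itself polls, as the node_ready.go lines show):

    package readiness

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // watchNodeReady blocks until the named node reports Ready=True or
    // the context is cancelled.
    func watchNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        w, err := cs.CoreV1().Nodes().Watch(ctx, metav1.ListOptions{
            FieldSelector: "metadata.name=" + name,
        })
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            node, ok := ev.Object.(*corev1.Node)
            if !ok {
                continue
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    return nil
                }
            }
        }
        return ctx.Err()
    }

A watch only works once the apiserver is reachable, which is why a tolerant poll is the safer choice during a control-plane restart like this one.
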
	I0812 10:33:12.513791   18189 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 10:33:12.564650   18189 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kvjbq" in "kube-system" namespace to be "Ready" ...
	I0812 10:33:12.637491   18189 pod_ready.go:97] node "functional-470148" hosting pod "coredns-7db6d8ff4d-kvjbq" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-470148" has status "Ready":"Unknown"
	I0812 10:33:12.637505   18189 pod_ready.go:81] duration metric: took 72.843248ms for pod "coredns-7db6d8ff4d-kvjbq" in "kube-system" namespace to be "Ready" ...
	E0812 10:33:12.637513   18189 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-470148" hosting pod "coredns-7db6d8ff4d-kvjbq" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-470148" has status "Ready":"Unknown"
	I0812 10:33:12.637531   18189 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-470148" in "kube-system" namespace to be "Ready" ...
	I0812 10:33:12.667977   18189 pod_ready.go:97] node "functional-470148" hosting pod "etcd-functional-470148" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-470148" has status "Ready":"Unknown"
	I0812 10:33:12.667993   18189 pod_ready.go:81] duration metric: took 30.455975ms for pod "etcd-functional-470148" in "kube-system" namespace to be "Ready" ...
	E0812 10:33:12.668001   18189 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-470148" hosting pod "etcd-functional-470148" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-470148" has status "Ready":"Unknown"
	I0812 10:33:12.668021   18189 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-470148" in "kube-system" namespace to be "Ready" ...
	I0812 10:33:12.679883   18189 pod_ready.go:97] error getting pod "kube-apiserver-functional-470148" in "kube-system" namespace (skipping!): pods "kube-apiserver-functional-470148" not found
	I0812 10:33:12.679898   18189 pod_ready.go:81] duration metric: took 11.870412ms for pod "kube-apiserver-functional-470148" in "kube-system" namespace to be "Ready" ...
	E0812 10:33:12.679907   18189 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-functional-470148" in "kube-system" namespace (skipping!): pods "kube-apiserver-functional-470148" not found
	I0812 10:33:12.679924   18189 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-470148" in "kube-system" namespace to be "Ready" ...
	I0812 10:33:12.705508   18189 pod_ready.go:97] node "functional-470148" hosting pod "kube-controller-manager-functional-470148" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-470148" has status "Ready":"Unknown"
	I0812 10:33:12.705521   18189 pod_ready.go:81] duration metric: took 25.591905ms for pod "kube-controller-manager-functional-470148" in "kube-system" namespace to be "Ready" ...
	E0812 10:33:12.705530   18189 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-470148" hosting pod "kube-controller-manager-functional-470148" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-470148" has status "Ready":"Unknown"
	I0812 10:33:12.705546   18189 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xmv5n" in "kube-system" namespace to be "Ready" ...
	I0812 10:33:12.712546   18189 pod_ready.go:97] node "functional-470148" hosting pod "kube-proxy-xmv5n" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-470148" has status "Ready":"Unknown"
	I0812 10:33:12.712559   18189 pod_ready.go:81] duration metric: took 7.005502ms for pod "kube-proxy-xmv5n" in "kube-system" namespace to be "Ready" ...
	E0812 10:33:12.712569   18189 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-470148" hosting pod "kube-proxy-xmv5n" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-470148" has status "Ready":"Unknown"
	I0812 10:33:12.712586   18189 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-470148" in "kube-system" namespace to be "Ready" ...
	I0812 10:33:12.918717   18189 pod_ready.go:97] node "functional-470148" hosting pod "kube-scheduler-functional-470148" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-470148" has status "Ready":"Unknown"
	I0812 10:33:12.918729   18189 pod_ready.go:81] duration metric: took 206.138469ms for pod "kube-scheduler-functional-470148" in "kube-system" namespace to be "Ready" ...
	E0812 10:33:12.918737   18189 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-470148" hosting pod "kube-scheduler-functional-470148" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-470148" has status "Ready":"Unknown"
	I0812 10:33:12.918754   18189 pod_ready.go:38] duration metric: took 404.955962ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 10:33:12.918774   18189 api_server.go:52] waiting for apiserver process to appear ...
	I0812 10:33:12.918822   18189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:33:12.936043   18189 api_server.go:72] duration metric: took 44.401129274s to wait for apiserver process to appear ...
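
Before probing /healthz, minikube confirms a kube-apiserver process exists at all: in the pgrep invocation above, -f matches against the full command line, -x requires the pattern to match it exactly (as an anchored regex), and -n returns only the newest match. The same guest-side check expressed in Go (sketch, hypothetical helper name):

    package health

    import "os/exec"

    // apiserverRunning reports whether at least one kube-apiserver
    // process matches; pgrep exits 0 iff it found a match.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }
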
	I0812 10:33:12.936060   18189 api_server.go:88] waiting for apiserver healthz status ...
	I0812 10:33:12.936076   18189 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
	I0812 10:33:12.943055   18189 api_server.go:279] https://192.168.39.217:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 10:33:12.943077   18189 api_server.go:103] status: https://192.168.39.217:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
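
Each [+]/[-] line in the 500 body above is one named health check: the poststarthook entries report whether the apiserver's startup hooks have completed, and /healthz returns 500 until every check passes. Here only the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes hooks are still pending, so the apiserver is serving but not yet fully initialized. Polling it until it answers 200 looks roughly like this (sketch; minikube's api_server.go verifies against the cluster CA rather than skipping verification):

    package health

    import (
        "context"
        "crypto/tls"
        "io"
        "net/http"
        "time"
    )

    // pollHealthz hits /healthz until it returns 200 "ok" or ctx expires.
    func pollHealthz(ctx context.Context, url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Sketch only: trust the apiserver's cert blindly.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            if resp, err := client.Get(url); err == nil {
                io.Copy(io.Discard, resp.Body) // a 500 body lists each failed [-] check
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            select {
            case <-time.After(500 * time.Millisecond):
            case <-ctx.Done():
                return ctx.Err()
            }
        }
    }
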
	I0812 10:33:13.436999   18189 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
	I0812 10:33:13.443284   18189 api_server.go:279] https://192.168.39.217:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 10:33:13.443299   18189 api_server.go:103] status: https://192.168.39.217:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 10:33:13.936279   18189 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
	I0812 10:33:13.940771   18189 api_server.go:279] https://192.168.39.217:8441/healthz returned 200:
	ok
	I0812 10:33:13.941825   18189 api_server.go:141] control plane version: v1.30.3
	I0812 10:33:13.941841   18189 api_server.go:131] duration metric: took 1.005776109s to wait for apiserver health ...
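	
	The run above polls https://192.168.39.217:8441/healthz until the 500s (caused by the still-pending rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks) give way to a 200. Below is a minimal Go sketch of that polling pattern; it is an illustration only, not minikube's actual api_server.go code, and the URL, timeout, and poll interval are assumptions.
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls an apiserver /healthz endpoint until it returns
	// HTTP 200 or the deadline expires. TLS verification is skipped purely
	// for the sketch; a real client would trust the cluster CA instead.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz body was "ok"
				}
				// A 500 here typically means one or more post-start
				// hooks have not finished yet, as in the log above.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.39.217:8441/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	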
	I0812 10:33:13.941847   18189 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 10:33:13.947654   18189 system_pods.go:59] 7 kube-system pods found
	I0812 10:33:13.947672   18189 system_pods.go:61] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
	I0812 10:33:13.947677   18189 system_pods.go:61] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
	I0812 10:33:13.947682   18189 system_pods.go:61] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
	I0812 10:33:13.947686   18189 system_pods.go:61] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
	I0812 10:33:13.947690   18189 system_pods.go:61] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
	I0812 10:33:13.947693   18189 system_pods.go:61] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
	I0812 10:33:13.947696   18189 system_pods.go:61] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
	I0812 10:33:13.947703   18189 system_pods.go:74] duration metric: took 5.850075ms to wait for pod list to return data ...
	I0812 10:33:13.947710   18189 default_sa.go:34] waiting for default service account to be created ...
	I0812 10:33:13.950653   18189 default_sa.go:45] found service account: "default"
	I0812 10:33:13.950664   18189 default_sa.go:55] duration metric: took 2.9492ms for default service account to be created ...
	I0812 10:33:13.950672   18189 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 10:33:13.956236   18189 system_pods.go:86] 7 kube-system pods found
	I0812 10:33:13.956253   18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
	I0812 10:33:13.956260   18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
	I0812 10:33:13.956265   18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
	I0812 10:33:13.956271   18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
	I0812 10:33:13.956276   18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
	I0812 10:33:13.956280   18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
	I0812 10:33:13.956283   18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
	I0812 10:33:13.956297   18189 retry.go:31] will retry after 235.617264ms: missing components: kube-apiserver
	I0812 10:33:14.198800   18189 system_pods.go:86] 7 kube-system pods found
	I0812 10:33:14.198815   18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
	I0812 10:33:14.198819   18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
	I0812 10:33:14.198823   18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
	I0812 10:33:14.198826   18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
	I0812 10:33:14.198828   18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
	I0812 10:33:14.198832   18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
	I0812 10:33:14.198835   18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
	I0812 10:33:14.198848   18189 retry.go:31] will retry after 273.302224ms: missing components: kube-apiserver
	I0812 10:33:14.479226   18189 system_pods.go:86] 7 kube-system pods found
	I0812 10:33:14.479241   18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
	I0812 10:33:14.479245   18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
	I0812 10:33:14.479249   18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
	I0812 10:33:14.479252   18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
	I0812 10:33:14.479255   18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
	I0812 10:33:14.479258   18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
	I0812 10:33:14.479261   18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
	I0812 10:33:14.479274   18189 retry.go:31] will retry after 340.582831ms: missing components: kube-apiserver
	I0812 10:33:14.827761   18189 system_pods.go:86] 7 kube-system pods found
	I0812 10:33:14.827781   18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
	I0812 10:33:14.827787   18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
	I0812 10:33:14.827793   18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
	I0812 10:33:14.827796   18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
	I0812 10:33:14.827800   18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
	I0812 10:33:14.827803   18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
	I0812 10:33:14.827807   18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
	I0812 10:33:14.827824   18189 retry.go:31] will retry after 507.416227ms: missing components: kube-apiserver
	I0812 10:33:15.342252   18189 system_pods.go:86] 7 kube-system pods found
	I0812 10:33:15.342266   18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
	I0812 10:33:15.342272   18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
	I0812 10:33:15.342275   18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
	I0812 10:33:15.342279   18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
	I0812 10:33:15.342282   18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
	I0812 10:33:15.342285   18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
	I0812 10:33:15.342287   18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
	I0812 10:33:15.342301   18189 retry.go:31] will retry after 711.212653ms: missing components: kube-apiserver
	I0812 10:33:16.060674   18189 system_pods.go:86] 7 kube-system pods found
	I0812 10:33:16.060689   18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
	I0812 10:33:16.060693   18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
	I0812 10:33:16.060697   18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
	I0812 10:33:16.060700   18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
	I0812 10:33:16.060702   18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
	I0812 10:33:16.060705   18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
	I0812 10:33:16.060708   18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
	I0812 10:33:16.060721   18189 retry.go:31] will retry after 895.133355ms: missing components: kube-apiserver
	I0812 10:33:16.962316   18189 system_pods.go:86] 7 kube-system pods found
	I0812 10:33:16.962331   18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
	I0812 10:33:16.962336   18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
	I0812 10:33:16.962339   18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
	I0812 10:33:16.962343   18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
	I0812 10:33:16.962347   18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
	I0812 10:33:16.962350   18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
	I0812 10:33:16.962352   18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
	I0812 10:33:16.962366   18189 retry.go:31] will retry after 1.177307444s: missing components: kube-apiserver
	I0812 10:33:18.146824   18189 system_pods.go:86] 7 kube-system pods found
	I0812 10:33:18.146839   18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
	I0812 10:33:18.146844   18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
	I0812 10:33:18.146847   18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
	I0812 10:33:18.146850   18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
	I0812 10:33:18.146853   18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
	I0812 10:33:18.146856   18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
	I0812 10:33:18.146860   18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
	I0812 10:33:18.146875   18189 retry.go:31] will retry after 1.125579278s: missing components: kube-apiserver
	I0812 10:33:19.031928   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0812 10:33:19.161615   18189 main.go:141] libmachine: Making call to close driver server
	I0812 10:33:19.161627   18189 main.go:141] libmachine: (functional-470148) Calling .Close
	I0812 10:33:19.161946   18189 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:33:19.161956   18189 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:33:19.161966   18189 main.go:141] libmachine: Making call to close driver server
	I0812 10:33:19.161974   18189 main.go:141] libmachine: (functional-470148) Calling .Close
	I0812 10:33:19.162223   18189 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:33:19.162235   18189 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:33:19.168478   18189 main.go:141] libmachine: Making call to close driver server
	I0812 10:33:19.168486   18189 main.go:141] libmachine: (functional-470148) Calling .Close
	I0812 10:33:19.168737   18189 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:33:19.168748   18189 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:33:19.281407   18189 system_pods.go:86] 7 kube-system pods found
	I0812 10:33:19.281422   18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
	I0812 10:33:19.281426   18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
	I0812 10:33:19.281430   18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
	I0812 10:33:19.281433   18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
	I0812 10:33:19.281435   18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
	I0812 10:33:19.281438   18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
	I0812 10:33:19.281440   18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
	I0812 10:33:19.281455   18189 retry.go:31] will retry after 1.594907103s: missing components: kube-apiserver
	I0812 10:33:20.883982   18189 system_pods.go:86] 7 kube-system pods found
	I0812 10:33:20.883997   18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
	I0812 10:33:20.884000   18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
	I0812 10:33:20.884003   18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
	I0812 10:33:20.884006   18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
	I0812 10:33:20.884009   18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
	I0812 10:33:20.884012   18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
	I0812 10:33:20.884014   18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
	I0812 10:33:20.884027   18189 retry.go:31] will retry after 1.709429198s: missing components: kube-apiserver
	I0812 10:33:22.452284   18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 10:33:22.600487   18189 system_pods.go:86] 7 kube-system pods found
	I0812 10:33:22.600508   18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
	I0812 10:33:22.600515   18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
	I0812 10:33:22.600521   18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
	I0812 10:33:22.600525   18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
	I0812 10:33:22.600530   18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
	I0812 10:33:22.600535   18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
	I0812 10:33:22.600539   18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
	I0812 10:33:22.600557   18189 retry.go:31] will retry after 2.50460952s: missing components: kube-apiserver
	I0812 10:33:23.046599   18189 main.go:141] libmachine: Making call to close driver server
	I0812 10:33:23.046614   18189 main.go:141] libmachine: (functional-470148) Calling .Close
	I0812 10:33:23.046926   18189 main.go:141] libmachine: (functional-470148) DBG | Closing plugin on server side
	I0812 10:33:23.046958   18189 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:33:23.046985   18189 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:33:23.046994   18189 main.go:141] libmachine: Making call to close driver server
	I0812 10:33:23.047001   18189 main.go:141] libmachine: (functional-470148) Calling .Close
	I0812 10:33:23.047245   18189 main.go:141] libmachine: Successfully made call to close driver server
	I0812 10:33:23.047254   18189 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 10:33:23.049104   18189 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0812 10:33:23.050236   18189 addons.go:510] duration metric: took 54.515282213s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0812 10:33:25.113823   18189 system_pods.go:86] 7 kube-system pods found
	I0812 10:33:25.113838   18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
	I0812 10:33:25.113842   18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
	I0812 10:33:25.113845   18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
	I0812 10:33:25.113848   18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
	I0812 10:33:25.113851   18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
	I0812 10:33:25.113854   18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
	I0812 10:33:25.113856   18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
	I0812 10:33:25.113869   18189 retry.go:31] will retry after 2.390372657s: missing components: kube-apiserver
	I0812 10:33:27.510907   18189 system_pods.go:86] 7 kube-system pods found
	I0812 10:33:27.510921   18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
	I0812 10:33:27.510925   18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
	I0812 10:33:27.510929   18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
	I0812 10:33:27.510932   18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
	I0812 10:33:27.510934   18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
	I0812 10:33:27.510937   18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
	I0812 10:33:27.510940   18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
	I0812 10:33:27.510952   18189 retry.go:31] will retry after 2.84289009s: missing components: kube-apiserver
	I0812 10:33:30.360322   18189 system_pods.go:86] 7 kube-system pods found
	I0812 10:33:30.360336   18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
	I0812 10:33:30.360344   18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
	I0812 10:33:30.360347   18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
	I0812 10:33:30.360350   18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
	I0812 10:33:30.360353   18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
	I0812 10:33:30.360356   18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
	I0812 10:33:30.360359   18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
	I0812 10:33:30.360374   18189 retry.go:31] will retry after 3.975491794s: missing components: kube-apiserver
	I0812 10:33:34.342461   18189 system_pods.go:86] 7 kube-system pods found
	I0812 10:33:34.342477   18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
	I0812 10:33:34.342480   18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
	I0812 10:33:34.342486   18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0812 10:33:34.342490   18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
	I0812 10:33:34.342496   18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
	I0812 10:33:34.342499   18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
	I0812 10:33:34.342502   18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
	I0812 10:33:34.342508   18189 system_pods.go:126] duration metric: took 20.391831948s to wait for k8s-apps to be running ...
	I0812 10:33:34.342514   18189 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 10:33:34.342560   18189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:33:34.358918   18189 system_svc.go:56] duration metric: took 16.390216ms WaitForService to wait for kubelet
	I0812 10:33:34.358942   18189 kubeadm.go:582] duration metric: took 1m5.824030436s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 10:33:34.358965   18189 node_conditions.go:102] verifying NodePressure condition ...
	I0812 10:33:34.363878   18189 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 10:33:34.363891   18189 node_conditions.go:123] node cpu capacity is 2
	I0812 10:33:34.363901   18189 node_conditions.go:105] duration metric: took 4.932646ms to run NodePressure ...
	I0812 10:33:34.363912   18189 start.go:241] waiting for startup goroutines ...
	I0812 10:33:34.363918   18189 start.go:246] waiting for cluster config update ...
	I0812 10:33:34.363927   18189 start.go:255] writing updated cluster config ...
	I0812 10:33:34.364208   18189 ssh_runner.go:195] Run: rm -f paused
	I0812 10:33:34.414649   18189 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0812 10:33:34.416576   18189 out.go:177] * Done! kubectl is now configured to use "functional-470148" cluster and "default" namespace by default
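	
	The retry loop above keeps reporting kube-apiserver as Pending until the pod finally shows Running at 10:33:34, which is why the k8s-apps wait takes ~20s. A hedged sketch of how one might reproduce that check by shelling out to kubectl and reading each pod's phase and Ready condition follows (the context name matches the cluster in this log; the helper itself is hypothetical and is not the test's code). Note that a pod can report Phase=Running while its Ready condition is still False.
	
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// podList mirrors just the fields of `kubectl get po -o json` that a
	// readiness check needs.
	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}
	
	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-470148",
			"get", "po", "-n", "kube-system", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}
	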
	
	
	==> Docker <==
	Aug 12 10:32:14 functional-470148 dockerd[6831]: time="2024-08-12T10:32:14.088857232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 12 10:32:14 functional-470148 dockerd[6831]: time="2024-08-12T10:32:14.088874275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 12 10:32:14 functional-470148 dockerd[6831]: time="2024-08-12T10:32:14.089079541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 12 10:32:14 functional-470148 dockerd[6831]: time="2024-08-12T10:32:14.359731114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 12 10:32:14 functional-470148 dockerd[6831]: time="2024-08-12T10:32:14.359817010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 12 10:32:14 functional-470148 dockerd[6831]: time="2024-08-12T10:32:14.359832634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 12 10:32:14 functional-470148 dockerd[6831]: time="2024-08-12T10:32:14.361076153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 12 10:32:58 functional-470148 dockerd[6824]: time="2024-08-12T10:32:58.276220310Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=f506135f8c8d6425b1299c699f3cf8b56d00bdc1826587feff07686f9ad07b73
	Aug 12 10:32:58 functional-470148 dockerd[6824]: time="2024-08-12T10:32:58.359486843Z" level=info msg="ignoring event" container=f506135f8c8d6425b1299c699f3cf8b56d00bdc1826587feff07686f9ad07b73 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 12 10:32:58 functional-470148 dockerd[6831]: time="2024-08-12T10:32:58.361262616Z" level=info msg="shim disconnected" id=f506135f8c8d6425b1299c699f3cf8b56d00bdc1826587feff07686f9ad07b73 namespace=moby
	Aug 12 10:32:58 functional-470148 dockerd[6831]: time="2024-08-12T10:32:58.361386890Z" level=warning msg="cleaning up after shim disconnected" id=f506135f8c8d6425b1299c699f3cf8b56d00bdc1826587feff07686f9ad07b73 namespace=moby
	Aug 12 10:32:58 functional-470148 dockerd[6831]: time="2024-08-12T10:32:58.361403489Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 12 10:32:58 functional-470148 dockerd[6831]: time="2024-08-12T10:32:58.456732962Z" level=info msg="shim disconnected" id=b9ee3e609048a776e4d8a63d2dae98cc445d9d0de63f78a48be2e60a079d89a8 namespace=moby
	Aug 12 10:32:58 functional-470148 dockerd[6824]: time="2024-08-12T10:32:58.457060373Z" level=info msg="ignoring event" container=b9ee3e609048a776e4d8a63d2dae98cc445d9d0de63f78a48be2e60a079d89a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 12 10:32:58 functional-470148 dockerd[6831]: time="2024-08-12T10:32:58.457221434Z" level=warning msg="cleaning up after shim disconnected" id=b9ee3e609048a776e4d8a63d2dae98cc445d9d0de63f78a48be2e60a079d89a8 namespace=moby
	Aug 12 10:32:58 functional-470148 dockerd[6831]: time="2024-08-12T10:32:58.457285361Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 12 10:33:10 functional-470148 dockerd[6831]: time="2024-08-12T10:33:10.496989562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 12 10:33:10 functional-470148 dockerd[6831]: time="2024-08-12T10:33:10.497152398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 12 10:33:10 functional-470148 dockerd[6831]: time="2024-08-12T10:33:10.497166573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 12 10:33:10 functional-470148 dockerd[6831]: time="2024-08-12T10:33:10.497270738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 12 10:33:10 functional-470148 cri-dockerd[7108]: time="2024-08-12T10:33:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8a55e3be83be0967bb96880a5d5688265c092fc63c11b376e65c13596416aa9/resolv.conf as [nameserver 192.168.122.1]"
	Aug 12 10:33:10 functional-470148 dockerd[6831]: time="2024-08-12T10:33:10.677172402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 12 10:33:10 functional-470148 dockerd[6831]: time="2024-08-12T10:33:10.677242091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 12 10:33:10 functional-470148 dockerd[6831]: time="2024-08-12T10:33:10.677253767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 12 10:33:10 functional-470148 dockerd[6831]: time="2024-08-12T10:33:10.677330068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c8647e19fcd0b       1f6d574d502f3       25 seconds ago       Running             kube-apiserver            0                   a8a55e3be83be       kube-apiserver-functional-470148
	5eb2a7794bcb5       cbb01a7bd410d       About a minute ago   Running             coredns                   3                   c97cca48fa507       coredns-7db6d8ff4d-kvjbq
	bf47ec590592c       6e38f40d628db       About a minute ago   Running             storage-provisioner       3                   e3788c8ecde71       storage-provisioner
	c4a5c937b5ec6       55bb025d2cfa5       About a minute ago   Running             kube-proxy                3                   f243e7d27f57e       kube-proxy-xmv5n
	a25c22de2da62       3861cfcd7c04c       About a minute ago   Running             etcd                      3                   23608e8ec34d6       etcd-functional-470148
	b869b0d288ea3       3edc18e7b7672       About a minute ago   Running             kube-scheduler            3                   94eb2244f55b4       kube-scheduler-functional-470148
	b9857f8f48fd9       76932a3b37d7e       About a minute ago   Running             kube-controller-manager   3                   9f980ac6fcab4       kube-controller-manager-functional-470148
	4f0c8adf0dda6       cbb01a7bd410d       2 minutes ago        Exited              coredns                   2                   6847d5bfe08ce       coredns-7db6d8ff4d-kvjbq
	1f1124951798c       6e38f40d628db       2 minutes ago        Exited              storage-provisioner       2                   6cd4ba5fbd18f       storage-provisioner
	16616cb9ce5d7       55bb025d2cfa5       2 minutes ago        Exited              kube-proxy                2                   ba1224227c458       kube-proxy-xmv5n
	7bdc8c688102e       3edc18e7b7672       2 minutes ago        Exited              kube-scheduler            2                   e46ea15b50bcf       kube-scheduler-functional-470148
	a82fb1fec5521       3861cfcd7c04c       2 minutes ago        Exited              etcd                      2                   efdfc20ff005e       etcd-functional-470148
	4360cfb87e380       76932a3b37d7e       2 minutes ago        Exited              kube-controller-manager   2                   d6db8459618c8       kube-controller-manager-functional-470148
	
	
	==> coredns [4f0c8adf0dda] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59924 - 10980 "HINFO IN 2879316814154866209.1178878182815758768. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.077865028s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5eb2a7794bcb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55256 - 55467 "HINFO IN 5285818657214602478.7816295238262685673. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034383267s
	
	
	==> describe nodes <==
	Name:               functional-470148
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-470148
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
	                    minikube.k8s.io/name=functional-470148
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T10_30_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 10:29:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-470148
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 10:33:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 10:33:14 +0000   Mon, 12 Aug 2024 10:33:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 10:33:14 +0000   Mon, 12 Aug 2024 10:33:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 10:33:14 +0000   Mon, 12 Aug 2024 10:33:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 10:33:14 +0000   Mon, 12 Aug 2024 10:33:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    functional-470148
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 05a003fadbaa4cf69bef382bbd2ca450
	  System UUID:                05a003fa-dbaa-4cf6-9bef-382bbd2ca450
	  Boot ID:                    5034c9d8-6737-4bcc-8fd3-dcd824db6967
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-kvjbq                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m18s
	  kube-system                 etcd-functional-470148                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m32s
	  kube-system                 kube-apiserver-functional-470148             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-controller-manager-functional-470148    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kube-proxy-xmv5n                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 kube-scheduler-functional-470148             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 80s                    kube-proxy       
	  Normal  Starting                 2m8s                   kube-proxy       
	  Normal  Starting                 3m16s                  kube-proxy       
	  Normal  Starting                 3m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m32s                  kubelet          Node functional-470148 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m32s                  kubelet          Node functional-470148 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m32s                  kubelet          Node functional-470148 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m28s                  kubelet          Node functional-470148 status is now: NodeReady
	  Normal  RegisteredNode           3m19s                  node-controller  Node functional-470148 event: Registered Node functional-470148 in Controller
	  Normal  NodeHasNoDiskPressure    2m15s (x8 over 2m15s)  kubelet          Node functional-470148 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m15s (x8 over 2m15s)  kubelet          Node functional-470148 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m15s (x7 over 2m15s)  kubelet          Node functional-470148 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           117s                   node-controller  Node functional-470148 event: Registered Node functional-470148 in Controller
	  Normal  Starting                 87s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  87s (x8 over 87s)      kubelet          Node functional-470148 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s (x8 over 87s)      kubelet          Node functional-470148 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s (x7 over 87s)      kubelet          Node functional-470148 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  87s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           69s                    node-controller  Node functional-470148 event: Registered Node functional-470148 in Controller
	  Normal  NodeNotReady             23s                    node-controller  Node functional-470148 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.148912] systemd-fstab-generator[3936]: Ignoring "noauto" option for root device
	[  +0.178291] systemd-fstab-generator[3951]: Ignoring "noauto" option for root device
	[  +0.719594] systemd-fstab-generator[4151]: Ignoring "noauto" option for root device
	[  +1.224171] kauditd_printk_skb: 179 callbacks suppressed
	[  +2.167579] systemd-fstab-generator[5041]: Ignoring "noauto" option for root device
	[  +5.552671] kauditd_printk_skb: 74 callbacks suppressed
	[ +12.467621] kauditd_printk_skb: 31 callbacks suppressed
	[  +2.320496] systemd-fstab-generator[5933]: Ignoring "noauto" option for root device
	[ +11.245361] systemd-fstab-generator[6369]: Ignoring "noauto" option for root device
	[  +0.108876] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.266839] systemd-fstab-generator[6402]: Ignoring "noauto" option for root device
	[  +0.196707] systemd-fstab-generator[6414]: Ignoring "noauto" option for root device
	[  +0.197869] systemd-fstab-generator[6429]: Ignoring "noauto" option for root device
	[  +5.289117] kauditd_printk_skb: 89 callbacks suppressed
	[Aug12 10:32] systemd-fstab-generator[7057]: Ignoring "noauto" option for root device
	[  +0.140992] systemd-fstab-generator[7069]: Ignoring "noauto" option for root device
	[  +0.139973] systemd-fstab-generator[7081]: Ignoring "noauto" option for root device
	[  +0.163099] systemd-fstab-generator[7096]: Ignoring "noauto" option for root device
	[  +0.571024] systemd-fstab-generator[7266]: Ignoring "noauto" option for root device
	[  +1.784448] systemd-fstab-generator[7388]: Ignoring "noauto" option for root device
	[  +0.082122] kauditd_printk_skb: 137 callbacks suppressed
	[  +5.466512] kauditd_printk_skb: 52 callbacks suppressed
	[ +12.725032] kauditd_printk_skb: 31 callbacks suppressed
	[  +2.282896] systemd-fstab-generator[8427]: Ignoring "noauto" option for root device
	[ +29.763210] kauditd_printk_skb: 16 callbacks suppressed
	
	
	==> etcd [a25c22de2da6] <==
	{"level":"info","ts":"2024-08-12T10:32:10.115241Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-12T10:32:10.115252Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-12T10:32:10.11576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd switched to configuration voters=(11573293933243462141)"}
	{"level":"info","ts":"2024-08-12T10:32:10.118075Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8f39477865362797","local-member-id":"a09c9983ac28f1fd","added-peer-id":"a09c9983ac28f1fd","added-peer-peer-urls":["https://192.168.39.217:2380"]}
	{"level":"info","ts":"2024-08-12T10:32:10.118368Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8f39477865362797","local-member-id":"a09c9983ac28f1fd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T10:32:10.119502Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-12T10:32:10.125392Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a09c9983ac28f1fd","initial-advertise-peer-urls":["https://192.168.39.217:2380"],"listen-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.217:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-12T10:32:10.125503Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-12T10:32:10.119639Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T10:32:10.119824Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-08-12T10:32:10.13205Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-08-12T10:32:11.645203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-12T10:32:11.645396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-12T10:32:11.645468Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd received MsgPreVoteResp from a09c9983ac28f1fd at term 3"}
	{"level":"info","ts":"2024-08-12T10:32:11.645551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became candidate at term 4"}
	{"level":"info","ts":"2024-08-12T10:32:11.645621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd received MsgVoteResp from a09c9983ac28f1fd at term 4"}
	{"level":"info","ts":"2024-08-12T10:32:11.645724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became leader at term 4"}
	{"level":"info","ts":"2024-08-12T10:32:11.645761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a09c9983ac28f1fd elected leader a09c9983ac28f1fd at term 4"}
	{"level":"info","ts":"2024-08-12T10:32:11.652142Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T10:32:11.652163Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a09c9983ac28f1fd","local-member-attributes":"{Name:functional-470148 ClientURLs:[https://192.168.39.217:2379]}","request-path":"/0/members/a09c9983ac28f1fd/attributes","cluster-id":"8f39477865362797","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-12T10:32:11.652557Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T10:32:11.652839Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-12T10:32:11.652925Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-12T10:32:11.654638Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-12T10:32:11.654902Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.217:2379"}
	
	
	==> etcd [a82fb1fec552] <==
	{"level":"info","ts":"2024-08-12T10:31:21.673361Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-08-12T10:31:23.250506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-12T10:31:23.25141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-12T10:31:23.251633Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd received MsgPreVoteResp from a09c9983ac28f1fd at term 2"}
	{"level":"info","ts":"2024-08-12T10:31:23.251721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became candidate at term 3"}
	{"level":"info","ts":"2024-08-12T10:31:23.251825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd received MsgVoteResp from a09c9983ac28f1fd at term 3"}
	{"level":"info","ts":"2024-08-12T10:31:23.251945Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became leader at term 3"}
	{"level":"info","ts":"2024-08-12T10:31:23.252088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a09c9983ac28f1fd elected leader a09c9983ac28f1fd at term 3"}
	{"level":"info","ts":"2024-08-12T10:31:23.258886Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T10:31:23.25884Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a09c9983ac28f1fd","local-member-attributes":"{Name:functional-470148 ClientURLs:[https://192.168.39.217:2379]}","request-path":"/0/members/a09c9983ac28f1fd/attributes","cluster-id":"8f39477865362797","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-12T10:31:23.259804Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T10:31:23.260299Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-12T10:31:23.260446Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-12T10:31:23.26125Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.217:2379"}
	{"level":"info","ts":"2024-08-12T10:31:23.2624Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-12T10:31:52.477946Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-12T10:31:52.478089Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-470148","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
	{"level":"warn","ts":"2024-08-12T10:31:52.478265Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T10:31:52.478384Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T10:31:52.517139Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.217:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T10:31:52.517322Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.217:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-12T10:31:52.517381Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"a09c9983ac28f1fd","current-leader-member-id":"a09c9983ac28f1fd"}
	{"level":"info","ts":"2024-08-12T10:31:52.520745Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-08-12T10:31:52.520959Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-08-12T10:31:52.520985Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-470148","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
	
	
	==> kernel <==
	 10:33:35 up 4 min,  0 users,  load average: 1.23, 0.82, 0.34
	Linux functional-470148 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c8647e19fcd0] <==
	I0812 10:33:12.448726       1 naming_controller.go:291] Starting NamingConditionController
	I0812 10:33:12.448758       1 establishing_controller.go:76] Starting EstablishingController
	I0812 10:33:12.448886       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0812 10:33:12.448922       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0812 10:33:12.449052       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0812 10:33:12.527824       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0812 10:33:12.529068       1 shared_informer.go:320] Caches are synced for configmaps
	I0812 10:33:12.529445       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0812 10:33:12.531717       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0812 10:33:12.535330       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0812 10:33:12.536943       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0812 10:33:12.536985       1 policy_source.go:224] refreshing policies
	I0812 10:33:12.537071       1 aggregator.go:165] initial CRD sync complete...
	I0812 10:33:12.537098       1 autoregister_controller.go:141] Starting autoregister controller
	I0812 10:33:12.537107       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0812 10:33:12.537112       1 cache.go:39] Caches are synced for autoregister controller
	I0812 10:33:12.581679       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0812 10:33:12.584434       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0812 10:33:12.584466       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0812 10:33:12.585259       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0812 10:33:12.587531       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0812 10:33:13.438412       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0812 10:33:13.722238       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217]
	I0812 10:33:13.723984       1 controller.go:615] quota admission added evaluator for: endpoints
	I0812 10:33:13.730807       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4360cfb87e38] <==
	I0812 10:31:37.884957       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0812 10:31:37.908092       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0812 10:31:37.909951       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0812 10:31:37.913150       1 shared_informer.go:320] Caches are synced for daemon sets
	I0812 10:31:37.915919       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0812 10:31:37.916255       1 shared_informer.go:320] Caches are synced for crt configmap
	I0812 10:31:37.920717       1 shared_informer.go:320] Caches are synced for namespace
	I0812 10:31:37.941105       1 shared_informer.go:320] Caches are synced for service account
	I0812 10:31:37.978721       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0812 10:31:38.000787       1 shared_informer.go:320] Caches are synced for endpoint
	I0812 10:31:38.013617       1 shared_informer.go:320] Caches are synced for taint
	I0812 10:31:38.014781       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0812 10:31:38.015182       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-470148"
	I0812 10:31:38.015408       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0812 10:31:38.053047       1 shared_informer.go:320] Caches are synced for expand
	I0812 10:31:38.065554       1 shared_informer.go:320] Caches are synced for persistent volume
	I0812 10:31:38.088413       1 shared_informer.go:320] Caches are synced for ephemeral
	I0812 10:31:38.089813       1 shared_informer.go:320] Caches are synced for attach detach
	I0812 10:31:38.096383       1 shared_informer.go:320] Caches are synced for PVC protection
	I0812 10:31:38.103703       1 shared_informer.go:320] Caches are synced for stateful set
	I0812 10:31:38.122802       1 shared_informer.go:320] Caches are synced for resource quota
	I0812 10:31:38.123142       1 shared_informer.go:320] Caches are synced for resource quota
	I0812 10:31:38.517414       1 shared_informer.go:320] Caches are synced for garbage collector
	I0812 10:31:38.517721       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0812 10:31:38.523977       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [b9857f8f48fd] <==
	I0812 10:32:26.277064       1 shared_informer.go:320] Caches are synced for deployment
	I0812 10:32:26.279713       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0812 10:32:26.287155       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0812 10:32:26.301176       1 shared_informer.go:320] Caches are synced for ephemeral
	I0812 10:32:26.303571       1 shared_informer.go:320] Caches are synced for HPA
	I0812 10:32:26.316214       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0812 10:32:26.370026       1 shared_informer.go:320] Caches are synced for attach detach
	I0812 10:32:26.458666       1 shared_informer.go:320] Caches are synced for resource quota
	I0812 10:32:26.482194       1 shared_informer.go:320] Caches are synced for resource quota
	I0812 10:32:26.483355       1 shared_informer.go:320] Caches are synced for disruption
	I0812 10:32:26.900942       1 shared_informer.go:320] Caches are synced for garbage collector
	I0812 10:32:26.901107       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0812 10:32:26.910649       1 shared_informer.go:320] Caches are synced for garbage collector
	E0812 10:32:56.484343       1 resource_quota_controller.go:440] failed to discover resources: Get "https://192.168.39.217:8441/api": dial tcp 192.168.39.217:8441: connect: connection refused
	I0812 10:32:56.912339       1 garbagecollector.go:828] "failed to discover preferred resources" logger="garbage-collector-controller" error="Get \"https://192.168.39.217:8441/api\": dial tcp 192.168.39.217:8441: connect: connection refused"
	E0812 10:33:06.272291       1 node_lifecycle_controller.go:973] "Error updating node" err="Put \"https://192.168.39.217:8441/api/v1/nodes/functional-470148/status\": dial tcp 192.168.39.217:8441: connect: connection refused" logger="node-lifecycle-controller" node="functional-470148"
	E0812 10:33:06.273225       1 node_lifecycle_controller.go:715] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="functional-470148"
	E0812 10:33:06.273268       1 node_lifecycle_controller.go:720] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.168.39.217:8441/api/v1/nodes/functional-470148\": dial tcp 192.168.39.217:8441: connect: connection refused" logger="node-lifecycle-controller" node=""
	I0812 10:33:11.274125       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	E0812 10:33:12.529970       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ControllerRevision: unknown (get controllerrevisions.apps)
	I0812 10:33:12.705921       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-apiserver-functional-470148" err="Operation cannot be fulfilled on pods \"kube-apiserver-functional-470148\": StorageError: invalid object, Code: 4, Key: /registry/pods/kube-system/kube-apiserver-functional-470148, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: c5774f60-aeeb-42e8-b996-40a18d4353a5, UID in object meta: 8366a459-a799-48ee-a137-2a3b7cab1261"
	E0812 10:33:12.706209       1 node_lifecycle_controller.go:753] unable to mark all pods NotReady on node functional-470148: Operation cannot be fulfilled on pods "kube-apiserver-functional-470148": StorageError: invalid object, Code: 4, Key: /registry/pods/kube-system/kube-apiserver-functional-470148, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: c5774f60-aeeb-42e8-b996-40a18d4353a5, UID in object meta: 8366a459-a799-48ee-a137-2a3b7cab1261; queuing for retry
	I0812 10:33:12.706427       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	E0812 10:33:17.713602       1 node_lifecycle_controller.go:973] "Error updating node" err="Operation cannot be fulfilled on nodes \"functional-470148\": the object has been modified; please apply your changes to the latest version and try again" logger="node-lifecycle-controller" node="functional-470148"
	I0812 10:33:17.737569       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [16616cb9ce5d] <==
	I0812 10:31:26.471691       1 server_linux.go:69] "Using iptables proxy"
	I0812 10:31:26.498389       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.217"]
	I0812 10:31:26.535834       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 10:31:26.535875       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 10:31:26.535897       1 server_linux.go:165] "Using iptables Proxier"
	I0812 10:31:26.538479       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 10:31:26.538946       1 server.go:872] "Version info" version="v1.30.3"
	I0812 10:31:26.539177       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 10:31:26.540496       1 config.go:192] "Starting service config controller"
	I0812 10:31:26.540729       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 10:31:26.540871       1 config.go:101] "Starting endpoint slice config controller"
	I0812 10:31:26.540937       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 10:31:26.541576       1 config.go:319] "Starting node config controller"
	I0812 10:31:26.543101       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 10:31:26.641840       1 shared_informer.go:320] Caches are synced for service config
	I0812 10:31:26.641986       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0812 10:31:26.644149       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c4a5c937b5ec] <==
	I0812 10:32:14.425507       1 server_linux.go:69] "Using iptables proxy"
	I0812 10:32:14.464625       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.217"]
	I0812 10:32:14.520413       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 10:32:14.520452       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 10:32:14.520471       1 server_linux.go:165] "Using iptables Proxier"
	I0812 10:32:14.524502       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 10:32:14.524922       1 server.go:872] "Version info" version="v1.30.3"
	I0812 10:32:14.525253       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 10:32:14.526732       1 config.go:192] "Starting service config controller"
	I0812 10:32:14.527469       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 10:32:14.527674       1 config.go:101] "Starting endpoint slice config controller"
	I0812 10:32:14.527789       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 10:32:14.528684       1 config.go:319] "Starting node config controller"
	I0812 10:32:14.528909       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 10:32:14.629606       1 shared_informer.go:320] Caches are synced for node config
	I0812 10:32:14.629878       1 shared_informer.go:320] Caches are synced for service config
	I0812 10:32:14.629909       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7bdc8c688102] <==
	I0812 10:31:22.448095       1 serving.go:380] Generated self-signed cert in-memory
	W0812 10:31:24.657670       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0812 10:31:24.658098       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0812 10:31:24.658248       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0812 10:31:24.658289       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0812 10:31:24.728787       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0812 10:31:24.729030       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 10:31:24.733239       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0812 10:31:24.733566       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0812 10:31:24.733686       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0812 10:31:24.733758       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0812 10:31:24.833931       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0812 10:31:52.526862       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0812 10:31:52.527635       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0812 10:31:52.527976       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b869b0d288ea] <==
	I0812 10:32:10.828703       1 serving.go:380] Generated self-signed cert in-memory
	W0812 10:32:13.134377       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0812 10:32:13.134596       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0812 10:32:13.134625       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0812 10:32:13.134851       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0812 10:32:13.208602       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0812 10:32:13.210031       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 10:32:13.212485       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0812 10:32:13.214537       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0812 10:32:13.214828       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0812 10:32:13.215046       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0812 10:32:13.315960       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 12 10:32:58 functional-470148 kubelet[7395]: I0812 10:32:58.854630    7395 scope.go:117] "RemoveContainer" containerID="b318d7a1a7227ed13fc664e68963eefc7fea3540d3cab5cbf8a1c775b881c1b5"
	Aug 12 10:32:58 functional-470148 kubelet[7395]: E0812 10:32:58.855589    7395 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: b318d7a1a7227ed13fc664e68963eefc7fea3540d3cab5cbf8a1c775b881c1b5" containerID="b318d7a1a7227ed13fc664e68963eefc7fea3540d3cab5cbf8a1c775b881c1b5"
	Aug 12 10:32:58 functional-470148 kubelet[7395]: I0812 10:32:58.855626    7395 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"b318d7a1a7227ed13fc664e68963eefc7fea3540d3cab5cbf8a1c775b881c1b5"} err="failed to get container status \"b318d7a1a7227ed13fc664e68963eefc7fea3540d3cab5cbf8a1c775b881c1b5\": rpc error: code = Unknown desc = Error response from daemon: No such container: b318d7a1a7227ed13fc664e68963eefc7fea3540d3cab5cbf8a1c775b881c1b5"
	Aug 12 10:33:00 functional-470148 kubelet[7395]: E0812 10:33:00.250281    7395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-470148?timeout=10s\": dial tcp 192.168.39.217:8441: connect: connection refused" interval="7s"
	Aug 12 10:33:00 functional-470148 kubelet[7395]: I0812 10:33:00.376145    7395 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e30d8b817ecaa1cdd5cb7a5d22f1dcb" path="/var/lib/kubelet/pods/2e30d8b817ecaa1cdd5cb7a5d22f1dcb/volumes"
	Aug 12 10:33:04 functional-470148 kubelet[7395]: E0812 10:33:04.445953    7395 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-470148\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-470148?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.217:8441: connect: connection refused"
	Aug 12 10:33:04 functional-470148 kubelet[7395]: E0812 10:33:04.447161    7395 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-470148\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-470148?timeout=10s\": dial tcp 192.168.39.217:8441: connect: connection refused"
	Aug 12 10:33:04 functional-470148 kubelet[7395]: E0812 10:33:04.447658    7395 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-470148\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-470148?timeout=10s\": dial tcp 192.168.39.217:8441: connect: connection refused"
	Aug 12 10:33:04 functional-470148 kubelet[7395]: E0812 10:33:04.448227    7395 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-470148\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-470148?timeout=10s\": dial tcp 192.168.39.217:8441: connect: connection refused"
	Aug 12 10:33:04 functional-470148 kubelet[7395]: E0812 10:33:04.448919    7395 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-470148\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-470148?timeout=10s\": dial tcp 192.168.39.217:8441: connect: connection refused"
	Aug 12 10:33:04 functional-470148 kubelet[7395]: E0812 10:33:04.448987    7395 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Aug 12 10:33:07 functional-470148 kubelet[7395]: E0812 10:33:07.251922    7395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-470148?timeout=10s\": dial tcp 192.168.39.217:8441: connect: connection refused" interval="7s"
	Aug 12 10:33:08 functional-470148 kubelet[7395]: I0812 10:33:08.371903    7395 status_manager.go:853] "Failed to get status for pod" podUID="407ce3b9e60bdbc54f8a7242fded82cc" pod="kube-system/kube-scheduler-functional-470148" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-470148\": dial tcp 192.168.39.217:8441: connect: connection refused"
	Aug 12 10:33:08 functional-470148 kubelet[7395]: E0812 10:33:08.405149    7395 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 10:33:08 functional-470148 kubelet[7395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 10:33:08 functional-470148 kubelet[7395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 10:33:08 functional-470148 kubelet[7395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 10:33:08 functional-470148 kubelet[7395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 10:33:08 functional-470148 kubelet[7395]: E0812 10:33:08.683229    7395 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.217:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-470148.17eaf499af0c5733  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-470148,UID:2e30d8b817ecaa1cdd5cb7a5d22f1dcb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.39.217:8441/readyz\": dial tcp 192.168.39.217:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-470148,},FirstTimestamp:2024-08-12 10:32:28.326631219 +0000 UTC m=+20.180387632,LastTimestamp:2024-08-12 10:32:28.326631219 +0000 UTC m=+20.180387632,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-470148,}"
	Aug 12 10:33:10 functional-470148 kubelet[7395]: I0812 10:33:10.372903    7395 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-470148" podUID="c5774f60-aeeb-42e8-b996-40a18d4353a5"
	Aug 12 10:33:10 functional-470148 kubelet[7395]: E0812 10:33:10.374133    7395 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-470148\": dial tcp 192.168.39.217:8441: connect: connection refused" pod="kube-system/kube-apiserver-functional-470148"
	Aug 12 10:33:10 functional-470148 kubelet[7395]: I0812 10:33:10.376736    7395 status_manager.go:853] "Failed to get status for pod" podUID="407ce3b9e60bdbc54f8a7242fded82cc" pod="kube-system/kube-scheduler-functional-470148" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-470148\": dial tcp 192.168.39.217:8441: connect: connection refused"
	Aug 12 10:33:10 functional-470148 kubelet[7395]: I0812 10:33:10.937474    7395 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-470148" podUID="c5774f60-aeeb-42e8-b996-40a18d4353a5"
	Aug 12 10:33:12 functional-470148 kubelet[7395]: I0812 10:33:12.641374    7395 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-470148"
	Aug 12 10:33:12 functional-470148 kubelet[7395]: I0812 10:33:12.951497    7395 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-470148" podUID="c5774f60-aeeb-42e8-b996-40a18d4353a5"
	
	
	==> storage-provisioner [1f1124951798] <==
	I0812 10:31:26.338246       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0812 10:31:26.373735       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0812 10:31:26.373810       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0812 10:31:43.788067       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0812 10:31:43.788567       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-470148_26e9557e-bfbe-420d-986e-c75191364b7c!
	I0812 10:31:43.789649       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f202fda3-e383-4a89-984d-ba1a4b34f369", APIVersion:"v1", ResourceVersion:"538", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-470148_26e9557e-bfbe-420d-986e-c75191364b7c became leader
	I0812 10:31:43.890713       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-470148_26e9557e-bfbe-420d-986e-c75191364b7c!
	
	
	==> storage-provisioner [bf47ec590592] <==
	I0812 10:32:14.334686       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0812 10:32:14.391035       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0812 10:32:14.391149       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0812 10:32:28.765377       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 10:32:31.786216       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 10:32:35.435977       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 10:32:37.595288       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 10:32:39.971773       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 10:32:42.205318       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 10:32:44.928360       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 10:32:48.166155       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 10:32:52.120382       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 10:32:54.635341       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 10:32:57.550134       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 10:33:00.314445       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 10:33:03.440854       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 10:33:06.121760       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 10:33:08.826465       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 10:33:12.457455       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 10:33:14.983893       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 10:33:17.472807       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0812 10:33:20.354806       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0812 10:33:20.355467       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f202fda3-e383-4a89-984d-ba1a4b34f369", APIVersion:"v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-470148_da0c0822-cdac-4a26-80c6-e53e53138a39 became leader
	I0812 10:33:20.355586       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-470148_da0c0822-cdac-4a26-80c6-e53e53138a39!
	I0812 10:33:20.456214       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-470148_da0c0822-cdac-4a26-80c6-e53e53138a39!
	

-- /stdout --
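
The repeated "Could not set up iptables canary" errors in the kubelet log above suggest the guest kernel lacks the ip6table_nat module, which is likely unrelated to the readiness failure itself. A minimal way to confirm this from the host, assuming the functional-470148 profile is still running (an illustrative command, not part of the test suite):

	minikube -p functional-470148 ssh -- sudo ip6tables -t nat -L -n

If the module is absent, this prints the same "can't initialize ip6tables table `nat'" message seen in the kubelet log.
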
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-470148 -n functional-470148
helpers_test.go:261: (dbg) Run:  kubectl --context functional-470148 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/ComponentHealth FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ComponentHealth (1.87s)
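
For manual triage, a minimal sketch of roughly the check the test performs, listing each control-plane pod alongside its Ready condition (assuming the functional-470148 context still exists; the jsonpath expression is illustrative and not taken from the test code):

	kubectl --context functional-470148 get po -l tier=control-plane -n kube-system -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'

Any pod printing False here reproduces the Ready=False condition that failed this test.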


Test pass (314/349)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.47
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.3/json-events 4.31
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.14
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-rc.0/json-events 6.69
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.14
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.56
31 TestOffline 111.37
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 289.04
38 TestAddons/serial/Volcano 41.74
40 TestAddons/serial/GCPAuth/Namespaces 0.12
42 TestAddons/parallel/Registry 16.79
43 TestAddons/parallel/Ingress 23.04
44 TestAddons/parallel/InspektorGadget 10.72
45 TestAddons/parallel/MetricsServer 5.79
46 TestAddons/parallel/HelmTiller 12.85
48 TestAddons/parallel/CSI 85.68
49 TestAddons/parallel/Headlamp 18.63
50 TestAddons/parallel/CloudSpanner 6.52
51 TestAddons/parallel/LocalPath 43.98
52 TestAddons/parallel/NvidiaDevicePlugin 5.42
53 TestAddons/parallel/Yakd 10.68
54 TestAddons/StoppedEnableDisable 13.61
55 TestCertOptions 121.37
56 TestCertExpiration 339.92
57 TestDockerFlags 121.26
58 TestForceSystemdFlag 71.4
59 TestForceSystemdEnv 110.64
61 TestKVMDriverInstallOrUpdate 4.2
65 TestErrorSpam/setup 54.7
66 TestErrorSpam/start 0.39
67 TestErrorSpam/status 0.86
68 TestErrorSpam/pause 1.33
69 TestErrorSpam/unpause 1.42
70 TestErrorSpam/stop 16.23
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 108.03
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 43.75
77 TestFunctional/serial/KubeContext 0.05
78 TestFunctional/serial/KubectlGetPods 0.09
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.36
82 TestFunctional/serial/CacheCmd/cache/add_local 1.41
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.27
87 TestFunctional/serial/CacheCmd/cache/delete 0.1
88 TestFunctional/serial/MinikubeKubectlCmd 0.11
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 105.14
92 TestFunctional/serial/LogsCmd 1.15
93 TestFunctional/serial/LogsFileCmd 1.17
94 TestFunctional/serial/InvalidService 4.88
96 TestFunctional/parallel/ConfigCmd 0.35
97 TestFunctional/parallel/DashboardCmd 28.83
98 TestFunctional/parallel/DryRun 0.31
99 TestFunctional/parallel/InternationalLanguage 0.14
100 TestFunctional/parallel/StatusCmd 0.86
104 TestFunctional/parallel/ServiceCmdConnect 8.58
105 TestFunctional/parallel/AddonsCmd 0.15
106 TestFunctional/parallel/PersistentVolumeClaim 48.19
108 TestFunctional/parallel/SSHCmd 0.43
109 TestFunctional/parallel/CpCmd 1.44
110 TestFunctional/parallel/MySQL 37.35
111 TestFunctional/parallel/FileSync 0.21
112 TestFunctional/parallel/CertSync 1.4
116 TestFunctional/parallel/NodeLabels 0.07
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.25
120 TestFunctional/parallel/License 0.21
121 TestFunctional/parallel/ServiceCmd/DeployApp 12.25
122 TestFunctional/parallel/Version/short 0.06
123 TestFunctional/parallel/Version/components 0.96
124 TestFunctional/parallel/DockerEnv/bash 0.94
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
132 TestFunctional/parallel/ImageCommands/ImageBuild 3.17
133 TestFunctional/parallel/ImageCommands/Setup 1.59
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.34
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.8
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.5
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.44
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.08
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.71
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
142 TestFunctional/parallel/ProfileCmd/profile_list 0.44
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
153 TestFunctional/parallel/MountCmd/any-port 24.8
154 TestFunctional/parallel/ServiceCmd/List 0.3
155 TestFunctional/parallel/ServiceCmd/JSONOutput 0.27
156 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
157 TestFunctional/parallel/ServiceCmd/Format 0.37
158 TestFunctional/parallel/ServiceCmd/URL 0.33
159 TestFunctional/parallel/MountCmd/specific-port 1.74
160 TestFunctional/parallel/MountCmd/VerifyCleanup 1.48
161 TestFunctional/delete_echo-server_images 0.04
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
164 TestGvisorAddon 197.15
167 TestMultiControlPlane/serial/StartCluster 244.92
168 TestMultiControlPlane/serial/DeployApp 5.67
169 TestMultiControlPlane/serial/PingHostFromPods 1.44
170 TestMultiControlPlane/serial/AddWorkerNode 69.33
171 TestMultiControlPlane/serial/NodeLabels 0.07
172 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.6
173 TestMultiControlPlane/serial/CopyFile 14.66
174 TestMultiControlPlane/serial/StopSecondaryNode 14.05
175 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.54
176 TestMultiControlPlane/serial/RestartSecondaryNode 48.69
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.58
178 TestMultiControlPlane/serial/RestartClusterKeepsNodes 259.02
179 TestMultiControlPlane/serial/DeleteSecondaryNode 8.76
180 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.39
181 TestMultiControlPlane/serial/StopCluster 39.1
182 TestMultiControlPlane/serial/RestartCluster 271.76
183 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
184 TestMultiControlPlane/serial/AddSecondaryNode 91
185 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.55
188 TestImageBuild/serial/Setup 52.75
189 TestImageBuild/serial/NormalBuild 2.09
190 TestImageBuild/serial/BuildWithBuildArg 1.14
191 TestImageBuild/serial/BuildWithDockerIgnore 0.84
192 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.8
196 TestJSONOutput/start/Command 70.44
197 TestJSONOutput/start/Audit 0
199 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/pause/Command 0.63
203 TestJSONOutput/pause/Audit 0
205 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/unpause/Command 0.6
209 TestJSONOutput/unpause/Audit 0
211 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
214 TestJSONOutput/stop/Command 7.57
215 TestJSONOutput/stop/Audit 0
217 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
218 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
219 TestErrorJSONOutput 0.21
224 TestMainNoArgs 0.05
225 TestMinikubeProfile 115.4
228 TestMountStart/serial/StartWithMountFirst 31.92
229 TestMountStart/serial/VerifyMountFirst 0.39
230 TestMountStart/serial/StartWithMountSecond 32.78
231 TestMountStart/serial/VerifyMountSecond 0.38
232 TestMountStart/serial/DeleteFirst 1.17
233 TestMountStart/serial/VerifyMountPostDelete 0.4
234 TestMountStart/serial/Stop 2.29
235 TestMountStart/serial/RestartStopped 27.07
236 TestMountStart/serial/VerifyMountPostStop 0.41
239 TestMultiNode/serial/FreshStart2Nodes 140.1
240 TestMultiNode/serial/DeployApp2Nodes 4.45
241 TestMultiNode/serial/PingHostFrom2Pods 0.88
242 TestMultiNode/serial/AddNode 57.27
243 TestMultiNode/serial/MultiNodeLabels 0.07
244 TestMultiNode/serial/ProfileList 0.23
245 TestMultiNode/serial/CopyFile 7.68
246 TestMultiNode/serial/StopNode 3.42
247 TestMultiNode/serial/StartAfterStop 43.69
248 TestMultiNode/serial/RestartKeepsNodes 179.9
249 TestMultiNode/serial/DeleteNode 2.55
250 TestMultiNode/serial/StopMultiNode 25.12
251 TestMultiNode/serial/RestartMultiNode 125.44
252 TestMultiNode/serial/ValidateNameConflict 54.56
257 TestPreload 202.63
259 TestScheduledStopUnix 123.02
260 TestSkaffold 140.05
263 TestRunningBinaryUpgrade 229.14
265 TestKubernetesUpgrade 220.44
267 TestStoppedBinaryUpgrade/Setup 0.61
268 TestStoppedBinaryUpgrade/Upgrade 219.41
277 TestPause/serial/Start 134.46
278 TestStoppedBinaryUpgrade/MinikubeLogs 1.58
280 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
281 TestNoKubernetes/serial/StartWithK8s 54.37
293 TestPause/serial/SecondStartNoReconfiguration 111.15
294 TestNoKubernetes/serial/StartWithStopK8s 50.92
295 TestNoKubernetes/serial/Start 32.64
296 TestPause/serial/Pause 1.17
297 TestPause/serial/VerifyStatus 0.28
298 TestPause/serial/Unpause 0.63
299 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
300 TestNoKubernetes/serial/ProfileList 1.12
301 TestPause/serial/PauseAgain 0.82
302 TestNoKubernetes/serial/Stop 2.31
303 TestPause/serial/DeletePaused 1.09
304 TestPause/serial/VerifyDeletedResources 4.58
305 TestNoKubernetes/serial/StartNoArgs 66.74
306 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
308 TestStartStop/group/old-k8s-version/serial/FirstStart 239.86
310 TestStartStop/group/no-preload/serial/FirstStart 136.77
312 TestStartStop/group/embed-certs/serial/FirstStart 132.77
313 TestStartStop/group/no-preload/serial/DeployApp 9.4
314 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
315 TestStartStop/group/no-preload/serial/Stop 13.38
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
317 TestStartStop/group/no-preload/serial/SecondStart 313.89
318 TestStartStop/group/old-k8s-version/serial/DeployApp 8.54
319 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.34
320 TestStartStop/group/embed-certs/serial/DeployApp 9.45
321 TestStartStop/group/old-k8s-version/serial/Stop 13.39
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.06
323 TestStartStop/group/embed-certs/serial/Stop 13.35
324 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
325 TestStartStop/group/old-k8s-version/serial/SecondStart 525.31
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
327 TestStartStop/group/embed-certs/serial/SecondStart 337.52
329 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 110.89
330 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.37
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
332 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.35
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
334 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 320.26
335 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
336 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
337 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
338 TestStartStop/group/no-preload/serial/Pause 2.74
340 TestStartStop/group/newest-cni/serial/FirstStart 72.04
341 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 16.01
342 TestStartStop/group/newest-cni/serial/DeployApp 0
343 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1
344 TestStartStop/group/newest-cni/serial/Stop 8.36
345 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
346 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
347 TestStartStop/group/newest-cni/serial/SecondStart 44.12
348 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
349 TestStartStop/group/embed-certs/serial/Pause 2.9
350 TestNetworkPlugins/group/auto/Start 134.19
351 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
354 TestStartStop/group/newest-cni/serial/Pause 2.57
355 TestNetworkPlugins/group/kindnet/Start 105.3
356 TestNetworkPlugins/group/auto/KubeletFlags 0.22
357 TestNetworkPlugins/group/auto/NetCatPod 12.25
358 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
359 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
360 TestNetworkPlugins/group/auto/DNS 0.19
361 TestNetworkPlugins/group/auto/Localhost 0.16
362 TestNetworkPlugins/group/auto/HairPin 0.16
363 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
364 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
365 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
366 TestNetworkPlugins/group/kindnet/NetCatPod 13.28
367 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
368 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.17
369 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
370 TestNetworkPlugins/group/calico/Start 106.44
371 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
372 TestStartStop/group/old-k8s-version/serial/Pause 3.07
373 TestNetworkPlugins/group/custom-flannel/Start 114.03
374 TestNetworkPlugins/group/kindnet/DNS 0.26
375 TestNetworkPlugins/group/kindnet/Localhost 0.16
376 TestNetworkPlugins/group/kindnet/HairPin 0.21
377 TestNetworkPlugins/group/false/Start 134.39
378 TestNetworkPlugins/group/enable-default-cni/Start 149.85
379 TestNetworkPlugins/group/calico/ControllerPod 6.01
380 TestNetworkPlugins/group/calico/KubeletFlags 0.24
381 TestNetworkPlugins/group/calico/NetCatPod 13.26
382 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
383 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.34
384 TestNetworkPlugins/group/calico/DNS 0.21
385 TestNetworkPlugins/group/calico/Localhost 0.21
386 TestNetworkPlugins/group/calico/HairPin 0.2
387 TestNetworkPlugins/group/custom-flannel/DNS 0.24
388 TestNetworkPlugins/group/custom-flannel/Localhost 0.54
389 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
390 TestNetworkPlugins/group/false/KubeletFlags 0.29
391 TestNetworkPlugins/group/false/NetCatPod 13.45
392 TestNetworkPlugins/group/flannel/Start 87.87
393 TestNetworkPlugins/group/bridge/Start 135.41
394 TestNetworkPlugins/group/false/DNS 0.2
395 TestNetworkPlugins/group/false/Localhost 0.17
396 TestNetworkPlugins/group/false/HairPin 0.15
397 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
398 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.29
399 TestNetworkPlugins/group/kubenet/Start 140.38
400 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
401 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
402 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
403 TestNetworkPlugins/group/flannel/ControllerPod 6.02
404 TestNetworkPlugins/group/flannel/KubeletFlags 0.47
405 TestNetworkPlugins/group/flannel/NetCatPod 11.53
406 TestNetworkPlugins/group/flannel/DNS 0.22
407 TestNetworkPlugins/group/flannel/Localhost 0.21
408 TestNetworkPlugins/group/flannel/HairPin 0.18
409 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
410 TestNetworkPlugins/group/bridge/NetCatPod 10.31
411 TestNetworkPlugins/group/bridge/DNS 0.18
412 TestNetworkPlugins/group/bridge/Localhost 0.15
413 TestNetworkPlugins/group/bridge/HairPin 0.15
414 TestNetworkPlugins/group/kubenet/KubeletFlags 0.38
415 TestNetworkPlugins/group/kubenet/NetCatPod 10.34
416 TestNetworkPlugins/group/kubenet/DNS 0.19
417 TestNetworkPlugins/group/kubenet/Localhost 0.14
418 TestNetworkPlugins/group/kubenet/HairPin 0.13
TestDownloadOnly/v1.20.0/json-events (8.47s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-428170 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-428170 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (8.466457308s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.47s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-428170
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-428170: exit status 85 (62.05302ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-428170 | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC |          |
	|         | -p download-only-428170        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 10:20:06
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 10:20:06.168407   10980 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:20:06.168561   10980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:20:06.168570   10980 out.go:304] Setting ErrFile to fd 2...
	I0812 10:20:06.168574   10980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:20:06.168789   10980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3796/.minikube/bin
	W0812 10:20:06.168916   10980 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19409-3796/.minikube/config/config.json: open /home/jenkins/minikube-integration/19409-3796/.minikube/config/config.json: no such file or directory
	I0812 10:20:06.169544   10980 out.go:298] Setting JSON to true
	I0812 10:20:06.170558   10980 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":154,"bootTime":1723457852,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 10:20:06.170640   10980 start.go:139] virtualization: kvm guest
	I0812 10:20:06.173563   10980 out.go:97] [download-only-428170] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0812 10:20:06.173777   10980 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19409-3796/.minikube/cache/preloaded-tarball: no such file or directory
	I0812 10:20:06.173827   10980 notify.go:220] Checking for updates...
	I0812 10:20:06.175719   10980 out.go:169] MINIKUBE_LOCATION=19409
	I0812 10:20:06.177591   10980 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 10:20:06.179421   10980 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19409-3796/kubeconfig
	I0812 10:20:06.181194   10980 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3796/.minikube
	I0812 10:20:06.182780   10980 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0812 10:20:06.185650   10980 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0812 10:20:06.186096   10980 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 10:20:06.304476   10980 out.go:97] Using the kvm2 driver based on user configuration
	I0812 10:20:06.304521   10980 start.go:297] selected driver: kvm2
	I0812 10:20:06.304528   10980 start.go:901] validating driver "kvm2" against <nil>
	I0812 10:20:06.304872   10980 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:20:06.304994   10980 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3796/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 10:20:06.321426   10980 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 10:20:06.321494   10980 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 10:20:06.322199   10980 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0812 10:20:06.322406   10980 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0812 10:20:06.322438   10980 cni.go:84] Creating CNI manager for ""
	I0812 10:20:06.322453   10980 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0812 10:20:06.322531   10980 start.go:340] cluster config:
	{Name:download-only-428170 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-428170 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:20:06.322812   10980 iso.go:125] acquiring lock: {Name:mk12273493f47d7003f1469d85b691a3ad57d0c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:20:06.325186   10980 out.go:97] Downloading VM boot image ...
	I0812 10:20:06.325226   10980 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19409-3796/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 10:20:08.858179   10980 out.go:97] Starting "download-only-428170" primary control-plane node in "download-only-428170" cluster
	I0812 10:20:08.858218   10980 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0812 10:20:08.880515   10980 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0812 10:20:08.880546   10980 cache.go:56] Caching tarball of preloaded images
	I0812 10:20:08.880686   10980 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0812 10:20:08.882478   10980 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0812 10:20:08.882502   10980 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0812 10:20:08.909227   10980 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19409-3796/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-428170 host does not exist
	  To start a cluster, run: "minikube start -p download-only-428170"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-428170
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.3/json-events (4.31s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-318025 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-318025 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=kvm2 : (4.31019773s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (4.31s)
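
The json-events tests drive "start" with -o=json, which makes minikube emit one CloudEvents-style JSON object per line instead of human-readable output. A minimal sketch of consuming that stream (the profile name and jq filter are illustrative, not from the test):

	$ out/minikube-linux-amd64 start -o=json --download-only -p demo \
	      --kubernetes-version=v1.30.3 --driver=kvm2 | jq -r '.type'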

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-318025
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-318025: exit status 85 (63.507777ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-428170 | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC |                     |
	|         | -p download-only-428170        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC | 12 Aug 24 10:20 UTC |
	| delete  | -p download-only-428170        | download-only-428170 | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC | 12 Aug 24 10:20 UTC |
	| start   | -o=json --download-only        | download-only-318025 | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC |                     |
	|         | -p download-only-318025        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 10:20:14
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 10:20:14.954808   11184 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:20:14.954943   11184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:20:14.954953   11184 out.go:304] Setting ErrFile to fd 2...
	I0812 10:20:14.954958   11184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:20:14.955171   11184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3796/.minikube/bin
	I0812 10:20:14.955717   11184 out.go:298] Setting JSON to true
	I0812 10:20:14.956571   11184 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":163,"bootTime":1723457852,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 10:20:14.956630   11184 start.go:139] virtualization: kvm guest
	I0812 10:20:14.959041   11184 out.go:97] [download-only-318025] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 10:20:14.959180   11184 notify.go:220] Checking for updates...
	I0812 10:20:14.960850   11184 out.go:169] MINIKUBE_LOCATION=19409
	I0812 10:20:14.962196   11184 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 10:20:14.963500   11184 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19409-3796/kubeconfig
	I0812 10:20:14.964985   11184 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3796/.minikube
	I0812 10:20:14.966420   11184 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-318025 host does not exist
	  To start a cluster, run: "minikube start -p download-only-318025"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-318025
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.0-rc.0/json-events (6.69s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-276724 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-276724 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=kvm2 : (6.692864202s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (6.69s)

TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-276724
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-276724: exit status 85 (58.847569ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-428170 | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC |                     |
	|         | -p download-only-428170           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC | 12 Aug 24 10:20 UTC |
	| delete  | -p download-only-428170           | download-only-428170 | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC | 12 Aug 24 10:20 UTC |
	| start   | -o=json --download-only           | download-only-318025 | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC |                     |
	|         | -p download-only-318025           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC | 12 Aug 24 10:20 UTC |
	| delete  | -p download-only-318025           | download-only-318025 | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC | 12 Aug 24 10:20 UTC |
	| start   | -o=json --download-only           | download-only-276724 | jenkins | v1.33.1 | 12 Aug 24 10:20 UTC |                     |
	|         | -p download-only-276724           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 10:20:19
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 10:20:19.591356   11369 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:20:19.591462   11369 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:20:19.591466   11369 out.go:304] Setting ErrFile to fd 2...
	I0812 10:20:19.591471   11369 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:20:19.591731   11369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3796/.minikube/bin
	I0812 10:20:19.592352   11369 out.go:298] Setting JSON to true
	I0812 10:20:19.593179   11369 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":168,"bootTime":1723457852,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 10:20:19.593242   11369 start.go:139] virtualization: kvm guest
	I0812 10:20:19.595097   11369 out.go:97] [download-only-276724] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 10:20:19.595244   11369 notify.go:220] Checking for updates...
	I0812 10:20:19.596476   11369 out.go:169] MINIKUBE_LOCATION=19409
	I0812 10:20:19.597537   11369 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 10:20:19.598725   11369 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19409-3796/kubeconfig
	I0812 10:20:19.600241   11369 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3796/.minikube
	I0812 10:20:19.601548   11369 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0812 10:20:19.604201   11369 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0812 10:20:19.604450   11369 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 10:20:19.637189   11369 out.go:97] Using the kvm2 driver based on user configuration
	I0812 10:20:19.637221   11369 start.go:297] selected driver: kvm2
	I0812 10:20:19.637228   11369 start.go:901] validating driver "kvm2" against <nil>
	I0812 10:20:19.637567   11369 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:20:19.637664   11369 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3796/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 10:20:19.653002   11369 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 10:20:19.653093   11369 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 10:20:19.653588   11369 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0812 10:20:19.653737   11369 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0812 10:20:19.653766   11369 cni.go:84] Creating CNI manager for ""
	I0812 10:20:19.653782   11369 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0812 10:20:19.653797   11369 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 10:20:19.653872   11369 start.go:340] cluster config:
	{Name:download-only-276724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-276724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:20:19.653993   11369 iso.go:125] acquiring lock: {Name:mk12273493f47d7003f1469d85b691a3ad57d0c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 10:20:19.655702   11369 out.go:97] Starting "download-only-276724" primary control-plane node in "download-only-276724" cluster
	I0812 10:20:19.655736   11369 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0812 10:20:19.679342   11369 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0812 10:20:19.679384   11369 cache.go:56] Caching tarball of preloaded images
	I0812 10:20:19.679551   11369 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0812 10:20:19.681436   11369 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0812 10:20:19.681467   11369 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0812 10:20:19.708453   11369 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:214beb6d5aadd59deaf940ce47a22f04 -> /home/jenkins/minikube-integration/19409-3796/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0812 10:20:21.819778   11369 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0812 10:20:21.819899   11369 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19409-3796/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0812 10:20:23.140503   11369 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0812 10:20:23.140958   11369 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/download-only-276724/config.json ...
	I0812 10:20:23.140997   11369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/download-only-276724/config.json: {Name:mk8880eeeab92668353cd5427adc68ec68d7368b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 10:20:23.141181   11369 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0812 10:20:23.141356   11369 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19409-3796/.minikube/cache/linux/amd64/v1.31.0-rc.0/kubectl
	
	
	* The control-plane node download-only-276724 host does not exist
	  To start a cluster, run: "minikube start -p download-only-276724"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.06s)
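
Besides the preload, this run downloads the matching kubectl binary from dl.k8s.io together with its .sha256 file (see the download.go line above). A hand-rolled equivalent of that fetch-and-verify step (a sketch, not how the test itself performs it):

	$ VER=v1.31.0-rc.0
	$ curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubectl"
	$ curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubectl.sha256"
	$ echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check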

TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.14s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-276724
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-972022 --alsologtostderr --binary-mirror http://127.0.0.1:40889 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-972022" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-972022
--- PASS: TestBinaryMirror (0.56s)
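
TestBinaryMirror verifies that the kubectl/kubelet/kubeadm downloads can be redirected away from dl.k8s.io; the harness serves a mirror on 127.0.0.1:40889 for the duration of the test. Reproducing the idea by hand might look like the following sketch, assuming the mirror directory mimics the upstream release path layout:

	$ python3 -m http.server 40889 --directory ./mirror &
	$ out/minikube-linux-amd64 start --download-only -p demo \
	      --binary-mirror http://127.0.0.1:40889 --driver=kvm2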

TestOffline (111.37s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-245085 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-245085 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m50.342040306s)
helpers_test.go:175: Cleaning up "offline-docker-245085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-245085
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-245085: (1.024375654s)
--- PASS: TestOffline (111.37s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-705597
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-705597: exit status 85 (47.767114ms)

-- stdout --
	* Profile "addons-705597" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-705597"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
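
The PreSetup tests hinge on minikube returning exit status 85 when the named profile does not exist. A small shell sketch of keying off that status (the code 85 is taken from the output above; the script around it is illustrative):

	$ out/minikube-linux-amd64 addons enable dashboard -p addons-705597; rc=$?
	$ [ "$rc" -eq 85 ] && echo 'profile missing; run: minikube start -p addons-705597'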

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-705597
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-705597: exit status 85 (47.515889ms)

-- stdout --
	* Profile "addons-705597" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-705597"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (289.04s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-705597 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-705597 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (4m49.043901981s)
--- PASS: TestAddons/Setup (289.04s)
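
Setup enables every addon under test in a single start invocation. The same addons can also be toggled on an already-running profile, which is how the later per-test teardowns disable them; for example:

	$ out/minikube-linux-amd64 -p addons-705597 addons list
	$ out/minikube-linux-amd64 -p addons-705597 addons enable metrics-server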

TestAddons/serial/Volcano (41.74s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 14.320288ms
addons_test.go:913: volcano-controller stabilized in 14.400521ms
addons_test.go:905: volcano-admission stabilized in 15.75607ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-764rv" [167a7ff3-a61d-43a9-a7bf-1d0b3e3e3e13] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004561704s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-5fxv9" [2755e3e2-88b8-4539-a8cb-01152857d693] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004004062s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-5q9zv" [7c2b1e21-1068-4858-937d-c704a40e78de] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005162977s
addons_test.go:932: (dbg) Run:  kubectl --context addons-705597 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-705597 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-705597 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a601210d-cc6e-46dc-9eec-bcd7dbb9f1b1] Pending
helpers_test.go:344: "test-job-nginx-0" [a601210d-cc6e-46dc-9eec-bcd7dbb9f1b1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [a601210d-cc6e-46dc-9eec-bcd7dbb9f1b1] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.004707272s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-705597 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-705597 addons disable volcano --alsologtostderr -v=1: (10.343298162s)
--- PASS: TestAddons/serial/Volcano (41.74s)
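
The fixture testdata/vcjob.yaml is not reproduced in this log, but the resulting pod name test-job-nginx-0 in namespace my-volcano implies a Volcano Job called test-job with a single nginx task. A minimal manifest of that shape might look like the following sketch (not the actual fixture):

	$ kubectl --context addons-705597 apply -f - <<-'EOF'
	apiVersion: batch.volcano.sh/v1alpha1
	kind: Job
	metadata:
	  name: test-job
	  namespace: my-volcano
	spec:
	  schedulerName: volcano
	  minAvailable: 1
	  tasks:
	    - name: nginx
	      replicas: 1
	      template:
	        spec:
	          restartPolicy: Never
	          containers:
	            - name: nginx
	              image: nginx
	EOF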

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-705597 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-705597 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/Registry (16.79s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 6.364997ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-tlpnx" [b7c95a89-ca18-40e1-91fe-2c63e801d91c] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005745584s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qstpw" [394d45bd-c0df-45d4-9043-84ba5542be0e] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.00555703s
addons_test.go:342: (dbg) Run:  kubectl --context addons-705597 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-705597 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-705597 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.821719626s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-705597 ip
2024/08/12 10:26:32 [DEBUG] GET http://192.168.39.27:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-705597 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.79s)
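
The in-cluster probe uses wget --spider against the registry's service DNS name, while the host-side GET above goes straight to the node IP on port 5000. A quick manual probe of the same endpoint (illustrative; /v2/_catalog is the standard registry API listing route, not something this test calls):

	$ REGISTRY_IP=$(out/minikube-linux-amd64 -p addons-705597 ip)
	$ curl -sS "http://${REGISTRY_IP}:5000/v2/_catalog"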

TestAddons/parallel/Ingress (23.04s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-705597 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-705597 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-705597 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7215c8da-e5d2-4448-b27e-a56a62113739] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7215c8da-e5d2-4448-b27e-a56a62113739] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004795721s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-705597 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-705597 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-705597 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.27
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-705597 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-705597 addons disable ingress-dns --alsologtostderr -v=1: (2.422853487s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-705597 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-705597 addons disable ingress --alsologtostderr -v=1: (7.893932072s)
--- PASS: TestAddons/parallel/Ingress (23.04s)
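
The ingress probe curls the controller with an explicit Host header (from inside the VM it targets 127.0.0.1), and the ingress-dns probe resolves a test hostname against the node IP. The same two checks run from the host would look roughly like:

	$ NODE_IP=$(out/minikube-linux-amd64 -p addons-705597 ip)
	$ curl -s -H 'Host: nginx.example.com' "http://${NODE_IP}/"
	$ nslookup hello-john.test "${NODE_IP}"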

TestAddons/parallel/InspektorGadget (10.72s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4qd2z" [6fd22128-3172-4958-a822-8ed02d1ed55d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00532331s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-705597
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-705597: (5.714505222s)
--- PASS: TestAddons/parallel/InspektorGadget (10.72s)

TestAddons/parallel/MetricsServer (5.79s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.77908ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-xvqgb" [e0eafcd0-95a0-4513-9996-c94d8d17596d] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008149761s
addons_test.go:417: (dbg) Run:  kubectl --context addons-705597 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-705597 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.79s)

TestAddons/parallel/HelmTiller (12.85s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.877054ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-85h9k" [40a0ffc7-127e-4bcb-a6d2-26dd4605c02b] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005565564s
addons_test.go:475: (dbg) Run:  kubectl --context addons-705597 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-705597 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.242562124s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-705597 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.85s)

TestAddons/parallel/CSI (85.68s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 21.165953ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-705597 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-705597 get pvc hpvc -o jsonpath={.status.phase} -n default
    [... the identical poll above ran 35 times in total while waiting for pvc "hpvc" to bind; 34 duplicate lines elided ...]
addons_test.go:580: (dbg) Run:  kubectl --context addons-705597 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5cdf5ab6-7b2b-40f0-8a4b-2127bbfed2f7] Pending
helpers_test.go:344: "task-pv-pod" [5cdf5ab6-7b2b-40f0-8a4b-2127bbfed2f7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5cdf5ab6-7b2b-40f0-8a4b-2127bbfed2f7] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004530003s
addons_test.go:590: (dbg) Run:  kubectl --context addons-705597 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-705597 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-705597 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-705597 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-705597 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-705597 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-705597 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
    [... the identical poll above ran 21 times in total while waiting for pvc "hpvc-restore" to bind; 20 duplicate lines elided ...]
addons_test.go:622: (dbg) Run:  kubectl --context addons-705597 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2e26e493-38b8-4fc5-908e-603f1c743860] Pending
helpers_test.go:344: "task-pv-pod-restore" [2e26e493-38b8-4fc5-908e-603f1c743860] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [2e26e493-38b8-4fc5-908e-603f1c743860] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004290784s
addons_test.go:632: (dbg) Run:  kubectl --context addons-705597 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-705597 delete pod task-pv-pod-restore: (1.172986665s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-705597 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-705597 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-705597 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-705597 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.768473873s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-705597 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (85.68s)
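
The repeated get-pvc invocations above come from a polling helper that re-reads .status.phase until the claim binds. With kubectl v1.23 or newer the same wait can be written declaratively (a sketch; the suite uses its own polling helper rather than this):

	$ kubectl --context addons-705597 wait pvc/hpvc \
	      --for=jsonpath='{.status.phase}'=Bound --timeout=6m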

TestAddons/parallel/Headlamp (18.63s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-705597 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-9d868696f-bjngb" [45eb7730-3878-4d8a-98c9-3dc01db40397] Pending
helpers_test.go:344: "headlamp-9d868696f-bjngb" [45eb7730-3878-4d8a-98c9-3dc01db40397] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-9d868696f-bjngb" [45eb7730-3878-4d8a-98c9-3dc01db40397] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-9d868696f-bjngb" [45eb7730-3878-4d8a-98c9-3dc01db40397] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004763765s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-705597 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-705597 addons disable headlamp --alsologtostderr -v=1: (5.79059582s)
--- PASS: TestAddons/parallel/Headlamp (18.63s)

TestAddons/parallel/CloudSpanner (6.52s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-zlb6q" [6ec96d2a-14b1-4cf6-a0c5-68fa970c1331] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.006261143s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-705597
--- PASS: TestAddons/parallel/CloudSpanner (6.52s)

TestAddons/parallel/LocalPath (43.98s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-705597 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-705597 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-705597 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-705597 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-705597 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-705597 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-705597 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-705597 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ad5478e0-0247-40bc-8d24-39e0a0729ee3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ad5478e0-0247-40bc-8d24-39e0a0729ee3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ad5478e0-0247-40bc-8d24-39e0a0729ee3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00499251s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-705597 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-705597 ssh "cat /opt/local-path-provisioner/pvc-5a9f9de9-7220-4e12-ab71-d7f37729b116_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-705597 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-705597 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-705597 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-705597 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (34.073003922s)
--- PASS: TestAddons/parallel/LocalPath (43.98s)
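Note: the flow above can be replayed by hand. A minimal sketch; the manifests are minikube's testdata, the enable command is inferred from the disable step above, and the on-disk path /opt/local-path-provisioner/<pvc-id>_<namespace>_<claim> follows the ssh step (substitute the real pvc id):

	# enable the local-path provisioner, then create the claim and the consuming pod
	minikube -p addons-705597 addons enable storage-provisioner-rancher
	kubectl --context addons-705597 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-705597 apply -f testdata/storage-provisioner-rancher/pod.yaml
	# poll until the claim reports Bound
	kubectl --context addons-705597 get pvc test-pvc -o jsonpath={.status.phase} -n default
	# read back the file the pod wrote on the node
	minikube -p addons-705597 ssh "cat /opt/local-path-provisioner/<pvc-id>_default_test-pvc/file1"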

TestAddons/parallel/NvidiaDevicePlugin (5.42s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xv67k" [70196903-55f3-4bd6-b908-5a49487117d0] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005168192s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-705597
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.42s)

TestAddons/parallel/Yakd (10.68s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-vx8g8" [3c510374-c97a-445b-9363-e800ffec0d06] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004755003s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-705597 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-705597 addons disable yakd --alsologtostderr -v=1: (5.670879847s)
--- PASS: TestAddons/parallel/Yakd (10.68s)

TestAddons/StoppedEnableDisable (13.61s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-705597
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-705597: (13.325310981s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-705597
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-705597
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-705597
--- PASS: TestAddons/StoppedEnableDisable (13.61s)

TestCertOptions (121.37s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-844513 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-844513 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m59.52669479s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-844513 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-844513 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-844513 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-844513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-844513
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-844513: (1.277440346s)
--- PASS: TestCertOptions (121.37s)
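Note: the point of this test is that every --apiserver-ips/--apiserver-names value ends up in the apiserver certificate's SANs and that the non-default port is honored. A sketch with the exact flags from the run (the grep is added here for convenience):

	minikube start -p cert-options-844513 --memory=2048 \
	  --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
	  --apiserver-names=localhost --apiserver-names=www.google.com \
	  --apiserver-port=8555 --driver=kvm2
	# 127.0.0.1, 192.168.15.15, localhost and www.google.com should all be listed as SANs
	minikube -p cert-options-844513 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"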

TestCertExpiration (339.92s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-609716 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-609716 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m41.439118851s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-609716 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-609716 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (57.329965383s)
helpers_test.go:175: Cleaning up "cert-expiration-609716" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-609716
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-609716: (1.151932681s)
--- PASS: TestCertExpiration (339.92s)
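Note: the two starts above bracket an expiry window. A sketch of the same sequence, assuming (consistent with the ~340s total versus ~159s of start time) that the test waits out the 3m expiry before the second start, which regenerates the expired certificates:

	minikube start -p cert-expiration-609716 --memory=2048 --cert-expiration=3m --driver=kvm2
	sleep 180   # let the 3-minute certificates lapse
	minikube start -p cert-expiration-609716 --memory=2048 --cert-expiration=8760h --driver=kvm2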

TestDockerFlags (121.26s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-236106 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
E0812 11:23:04.964814   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
E0812 11:23:43.738191   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-236106 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m58.87664628s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-236106 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-236106 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-236106" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-236106
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-236106: (1.881583911s)
--- PASS: TestDockerFlags (121.26s)
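Note: the two systemctl probes above are the actual assertions. A sketch of the round trip; --docker-env values should surface in the docker unit's Environment, and --docker-opt values as extra dockerd arguments in its ExecStart line:

	minikube start -p docker-flags-236106 --memory=2048 \
	  --docker-env=FOO=BAR --docker-env=BAZ=BAT \
	  --docker-opt=debug --docker-opt=icc=true --driver=kvm2
	minikube -p docker-flags-236106 ssh "sudo systemctl show docker --property=Environment --no-pager"
	minikube -p docker-flags-236106 ssh "sudo systemctl show docker --property=ExecStart --no-pager"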

TestForceSystemdFlag (71.4s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-901957 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-901957 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m9.955667842s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-901957 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-901957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-901957
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-901957: (1.154027612s)
--- PASS: TestForceSystemdFlag (71.40s)
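Note: the single assertion here is the cgroup driver. With --force-systemd the runtime should report systemd rather than the default cgroupfs:

	minikube start -p force-systemd-flag-901957 --memory=2048 --force-systemd --driver=kvm2
	minikube -p force-systemd-flag-901957 ssh "docker info --format {{.CgroupDriver}}"   # expected: systemd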

TestForceSystemdEnv (110.64s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-755794 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-755794 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m49.243603622s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-755794 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-755794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-755794
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-755794: (1.099170207s)
--- PASS: TestForceSystemdEnv (110.64s)
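Note: same check as TestForceSystemdFlag but driven by the environment; MINIKUBE_FORCE_SYSTEMD appears in the start output later in this report, so the sketch assumes that is the variable the test sets:

	MINIKUBE_FORCE_SYSTEMD=true minikube start -p force-systemd-env-755794 --memory=2048 --driver=kvm2
	minikube -p force-systemd-env-755794 ssh "docker info --format {{.CgroupDriver}}"   # expected: systemd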

TestKVMDriverInstallOrUpdate (4.2s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.20s)

TestErrorSpam/setup (54.7s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-338210 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-338210 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-338210 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-338210 --driver=kvm2 : (54.698434578s)
--- PASS: TestErrorSpam/setup (54.70s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-338210 --log_dir /tmp/nospam-338210 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-338210 --log_dir /tmp/nospam-338210 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-338210 --log_dir /tmp/nospam-338210 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.86s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-338210 --log_dir /tmp/nospam-338210 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-338210 --log_dir /tmp/nospam-338210 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-338210 --log_dir /tmp/nospam-338210 status
--- PASS: TestErrorSpam/status (0.86s)

TestErrorSpam/pause (1.33s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-338210 --log_dir /tmp/nospam-338210 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-338210 --log_dir /tmp/nospam-338210 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-338210 --log_dir /tmp/nospam-338210 pause
--- PASS: TestErrorSpam/pause (1.33s)

TestErrorSpam/unpause (1.42s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-338210 --log_dir /tmp/nospam-338210 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-338210 --log_dir /tmp/nospam-338210 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-338210 --log_dir /tmp/nospam-338210 unpause
--- PASS: TestErrorSpam/unpause (1.42s)

TestErrorSpam/stop (16.23s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-338210 --log_dir /tmp/nospam-338210 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-338210 --log_dir /tmp/nospam-338210 stop: (12.561099326s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-338210 --log_dir /tmp/nospam-338210 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-338210 --log_dir /tmp/nospam-338210 stop: (1.607018781s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-338210 --log_dir /tmp/nospam-338210 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-338210 --log_dir /tmp/nospam-338210 stop: (2.06252306s)
--- PASS: TestErrorSpam/stop (16.23s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19409-3796/.minikube/files/etc/test/nested/copy/10968/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (108.03s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-470148 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
E0812 10:30:16.594867   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
E0812 10:30:16.600729   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
E0812 10:30:16.611032   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
E0812 10:30:16.631456   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
E0812 10:30:16.671758   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
E0812 10:30:16.752147   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
E0812 10:30:16.912629   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
E0812 10:30:17.233277   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
E0812 10:30:17.873546   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
E0812 10:30:19.154363   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
E0812 10:30:21.715274   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
E0812 10:30:26.836001   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
E0812 10:30:37.077078   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
E0812 10:30:57.557633   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-470148 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m48.029006601s)
--- PASS: TestFunctional/serial/StartWithProxy (108.03s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (43.75s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-470148 --alsologtostderr -v=8
E0812 10:31:38.518064   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-470148 --alsologtostderr -v=8: (43.749149219s)
functional_test.go:663: soft start took 43.750209806s for "functional-470148" cluster.
--- PASS: TestFunctional/serial/SoftStart (43.75s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-470148 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.36s)

TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-470148 /tmp/TestFunctionalserialCacheCmdcacheadd_local3000836848/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 cache add minikube-local-cache-test:functional-470148
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 cache delete minikube-local-cache-test:functional-470148
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-470148
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-470148 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (226.571845ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.27s)
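Note: this is the interesting cache sequence: the image is deleted inside the node, crictl confirms it is gone (the exit status 1 block above), and cache reload restores it from the host-side cache. Replayed by hand:

	minikube -p functional-470148 cache add registry.k8s.io/pause:latest
	minikube -p functional-470148 ssh sudo docker rmi registry.k8s.io/pause:latest
	minikube -p functional-470148 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
	minikube -p functional-470148 cache reload
	minikube -p functional-470148 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again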

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 kubectl -- --context functional-470148 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-470148 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (105.14s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-470148 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0812 10:33:00.438859   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-470148 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m45.140001698s)
functional_test.go:761: restart took 1m45.140224024s for "functional-470148" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (105.14s)
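Note: --extra-config takes component.key=value pairs and is persisted in the profile (the ExtraOptions entry is visible in the config dumps later in this report). The restart from the run:

	minikube start -p functional-470148 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all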

TestFunctional/serial/LogsCmd (1.15s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-470148 logs: (1.146214032s)
--- PASS: TestFunctional/serial/LogsCmd (1.15s)

TestFunctional/serial/LogsFileCmd (1.17s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 logs --file /tmp/TestFunctionalserialLogsFileCmd4096749815/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-470148 logs --file /tmp/TestFunctionalserialLogsFileCmd4096749815/001/logs.txt: (1.166575285s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.17s)

TestFunctional/serial/InvalidService (4.88s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-470148 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-470148
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-470148: exit status 115 (282.817384ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.217:31764 |
	|-----------|-------------|-------------|-----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-470148 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-470148 delete -f testdata/invalidsvc.yaml: (1.350276104s)
--- PASS: TestFunctional/serial/InvalidService (4.88s)
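Note: exit status 115 above is the SVC_UNREACHABLE path: the service resolves to a NodePort URL but no running pod backs it. Reproduced by hand:

	kubectl --context functional-470148 apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p functional-470148   # exit 115: no running pod for the service
	kubectl --context functional-470148 delete -f testdata/invalidsvc.yaml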

TestFunctional/parallel/ConfigCmd (0.35s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-470148 config get cpus: exit status 14 (59.089673ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-470148 config get cpus: exit status 14 (52.006673ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
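Note: the exit status 14 blocks above are the expected result of config get on an unset key. The full round trip:

	minikube -p functional-470148 config unset cpus
	minikube -p functional-470148 config get cpus    # exit 14: key not found
	minikube -p functional-470148 config set cpus 2
	minikube -p functional-470148 config get cpus    # prints 2
	minikube -p functional-470148 config unset cpus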

TestFunctional/parallel/DashboardCmd (28.83s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-470148 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-470148 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 20749: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (28.83s)

TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-470148 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-470148 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (153.63169ms)

-- stdout --
	* [functional-470148] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19409-3796/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3796/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
-- /stdout --
** stderr ** 
	I0812 10:33:57.393856   20657 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:33:57.393975   20657 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:33:57.393989   20657 out.go:304] Setting ErrFile to fd 2...
	I0812 10:33:57.393994   20657 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:33:57.394237   20657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3796/.minikube/bin
	I0812 10:33:57.394773   20657 out.go:298] Setting JSON to false
	I0812 10:33:57.395843   20657 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":985,"bootTime":1723457852,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 10:33:57.395911   20657 start.go:139] virtualization: kvm guest
	I0812 10:33:57.398351   20657 out.go:177] * [functional-470148] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 10:33:57.399976   20657 notify.go:220] Checking for updates...
	I0812 10:33:57.399992   20657 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 10:33:57.401537   20657 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 10:33:57.403221   20657 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3796/kubeconfig
	I0812 10:33:57.404869   20657 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3796/.minikube
	I0812 10:33:57.406441   20657 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 10:33:57.407826   20657 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 10:33:57.409564   20657 config.go:182] Loaded profile config "functional-470148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 10:33:57.409959   20657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:33:57.410050   20657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:33:57.426859   20657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44689
	I0812 10:33:57.427432   20657 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:33:57.428080   20657 main.go:141] libmachine: Using API Version  1
	I0812 10:33:57.428109   20657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:33:57.428520   20657 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:33:57.428754   20657 main.go:141] libmachine: (functional-470148) Calling .DriverName
	I0812 10:33:57.428995   20657 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 10:33:57.429300   20657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:33:57.429339   20657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:33:57.450410   20657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41333
	I0812 10:33:57.450835   20657 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:33:57.451461   20657 main.go:141] libmachine: Using API Version  1
	I0812 10:33:57.451487   20657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:33:57.452156   20657 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:33:57.452472   20657 main.go:141] libmachine: (functional-470148) Calling .DriverName
	I0812 10:33:57.493942   20657 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 10:33:57.495766   20657 start.go:297] selected driver: kvm2
	I0812 10:33:57.495797   20657 start.go:901] validating driver "kvm2" against &{Name:functional-470148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-470148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:33:57.496067   20657 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 10:33:57.499407   20657 out.go:177] 
	W0812 10:33:57.501057   20657 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0812 10:33:57.502473   20657 out.go:177] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-470148 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.31s)
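Note: --dry-run validates the requested settings against the existing profile without touching the VM; the first call above fails with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because 250MiB is under the 1800MB usable minimum, while the second call, without the memory override, validates cleanly:

	minikube start -p functional-470148 --dry-run --memory 250MB --driver=kvm2   # exit 23
	minikube start -p functional-470148 --dry-run --driver=kvm2                  # exit 0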

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-470148 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-470148 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (143.102157ms)

-- stdout --
	* [functional-470148] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19409-3796/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3796/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0812 10:34:22.410764   21347 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:34:22.411114   21347 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:34:22.411127   21347 out.go:304] Setting ErrFile to fd 2...
	I0812 10:34:22.411133   21347 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:34:22.411483   21347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3796/.minikube/bin
	I0812 10:34:22.412014   21347 out.go:298] Setting JSON to false
	I0812 10:34:22.412977   21347 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1010,"bootTime":1723457852,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 10:34:22.413046   21347 start.go:139] virtualization: kvm guest
	I0812 10:34:22.415386   21347 out.go:177] * [functional-470148] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0812 10:34:22.416940   21347 notify.go:220] Checking for updates...
	I0812 10:34:22.416964   21347 out.go:177]   - MINIKUBE_LOCATION=19409
	I0812 10:34:22.418320   21347 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 10:34:22.419668   21347 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19409-3796/kubeconfig
	I0812 10:34:22.420957   21347 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3796/.minikube
	I0812 10:34:22.422348   21347 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 10:34:22.423714   21347 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 10:34:22.425225   21347 config.go:182] Loaded profile config "functional-470148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 10:34:22.425647   21347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:34:22.425693   21347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:34:22.441030   21347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44951
	I0812 10:34:22.441530   21347 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:34:22.442313   21347 main.go:141] libmachine: Using API Version  1
	I0812 10:34:22.442344   21347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:34:22.442664   21347 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:34:22.442846   21347 main.go:141] libmachine: (functional-470148) Calling .DriverName
	I0812 10:34:22.443151   21347 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 10:34:22.443579   21347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:34:22.443633   21347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:34:22.458622   21347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I0812 10:34:22.459053   21347 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:34:22.459617   21347 main.go:141] libmachine: Using API Version  1
	I0812 10:34:22.459646   21347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:34:22.459961   21347 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:34:22.460160   21347 main.go:141] libmachine: (functional-470148) Calling .DriverName
	I0812 10:34:22.494896   21347 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0812 10:34:22.496455   21347 start.go:297] selected driver: kvm2
	I0812 10:34:22.496479   21347 start.go:901] validating driver "kvm2" against &{Name:functional-470148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-470148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 10:34:22.496659   21347 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 10:34:22.499455   21347 out.go:177] 
	W0812 10:34:22.501141   21347 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0812 10:34:22.502797   21347 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.86s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.86s)
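Note: status accepts a Go template via -f; the labels before each {{...}} are free-form (the run above spells one "kublet", which is only a label typo in the test, not a field name). A sketch with the fields used here:

	minikube -p functional-470148 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	minikube -p functional-470148 status -o json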

TestFunctional/parallel/ServiceCmdConnect (8.58s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-470148 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-470148 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-cm9lx" [1af52650-fc36-4ba4-b802-89ae0a3fd50b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-cm9lx" [1af52650-fc36-4ba4-b802-89ae0a3fd50b] Running
2024/08/12 10:34:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.005673406s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.217:31855
functional_test.go:1675: http://192.168.39.217:31855: success! body:

Hostname: hello-node-connect-57b4589c47-cm9lx

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.217:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.217:31855
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.58s)
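For reference, the sequence this test automates can be replayed by hand (all names taken from the log above); `service --url` is what resolves the node IP and NodePort into the address that was probed:

kubectl --context functional-470148 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-470148 expose deployment hello-node-connect --type=NodePort --port=8080
URL=$(out/minikube-linux-amd64 -p functional-470148 service hello-node-connect --url)
curl -s "$URL"    # echoserver answers with the request dump shown above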

TestFunctional/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (48.19s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005341412s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-470148 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-470148 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-470148 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-470148 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-470148 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f3845608-cfdd-4871-a6df-6071cbcc3abc] Pending
helpers_test.go:344: "sp-pod" [f3845608-cfdd-4871-a6df-6071cbcc3abc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f3845608-cfdd-4871-a6df-6071cbcc3abc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.00422736s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-470148 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-470148 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-470148 delete -f testdata/storage-provisioner/pod.yaml: (1.132645897s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-470148 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [649f424b-5c6a-4325-b63c-6e43ac7709f7] Pending
helpers_test.go:344: "sp-pod" [649f424b-5c6a-4325-b63c-6e43ac7709f7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [649f424b-5c6a-4325-b63c-6e43ac7709f7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.005597166s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-470148 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.19s)
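The pvc.yaml and pod.yaml fixtures themselves are not reproduced in the log; a hypothetical claim of the same shape (the name `myclaim` comes from the `get pvc` calls above, the size is a placeholder and may differ from the real fixture) would be:

kubectl --context functional-470148 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi   # placeholder size, not taken from the log
EOF

The touch-before-delete / ls-after-recreate sequence above is what proves the data on the volume (/tmp/mount/foo) outlives any single pod.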

TestFunctional/parallel/SSHCmd (0.43s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

TestFunctional/parallel/CpCmd (1.44s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh -n functional-470148 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 cp functional-470148:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1120856169/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh -n functional-470148 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh -n functional-470148 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.44s)
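The copy directions exercised here, as standalone commands (paths from the log; the local destination name is arbitrary):

out/minikube-linux-amd64 -p functional-470148 cp testdata/cp-test.txt /home/docker/cp-test.txt           # host -> guest
out/minikube-linux-amd64 -p functional-470148 cp functional-470148:/home/docker/cp-test.txt ./cp-test.txt  # guest -> host
out/minikube-linux-amd64 -p functional-470148 ssh "sudo cat /home/docker/cp-test.txt"                    # verify inside the guest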

TestFunctional/parallel/MySQL (37.35s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-470148 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-7qvld" [cde89f6c-cbbe-4b99-9914-cbd425c78cab] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-7qvld" [cde89f6c-cbbe-4b99-9914-cbd425c78cab] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 28.009664653s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-470148 exec mysql-64454c8b5c-7qvld -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-470148 exec mysql-64454c8b5c-7qvld -- mysql -ppassword -e "show databases;": exit status 1 (276.601863ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-470148 exec mysql-64454c8b5c-7qvld -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-470148 exec mysql-64454c8b5c-7qvld -- mysql -ppassword -e "show databases;": exit status 1 (184.448285ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-470148 exec mysql-64454c8b5c-7qvld -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-470148 exec mysql-64454c8b5c-7qvld -- mysql -ppassword -e "show databases;": exit status 1 (182.566647ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-470148 exec mysql-64454c8b5c-7qvld -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-470148 exec mysql-64454c8b5c-7qvld -- mysql -ppassword -e "show databases;": exit status 1 (365.888352ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-470148 exec mysql-64454c8b5c-7qvld -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (37.35s)
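The four failed probes above are expected noise: ERROR 1045 and ERROR 2002 are what mysqld returns while it is still initializing and restarting inside the container, and the harness simply retries until the final attempt succeeds. A sketch of the same poll loop in shell (assuming the deployment created by testdata/mysql.yaml is named mysql, as the pod name suggests):

# keep probing until mysqld accepts the query
until kubectl --context functional-470148 exec deploy/mysql -- \
  mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
  sleep 2
done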

TestFunctional/parallel/FileSync (0.21s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/10968/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh "sudo cat /etc/test/nested/copy/10968/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

TestFunctional/parallel/CertSync (1.4s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/10968.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh "sudo cat /etc/ssl/certs/10968.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/10968.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh "sudo cat /usr/share/ca-certificates/10968.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/109682.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh "sudo cat /etc/ssl/certs/109682.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/109682.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh "sudo cat /usr/share/ca-certificates/109682.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.40s)
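The `.0` files are OpenSSL subject-hash names: the test checks each synced PEM and then the hash-named entry derived from it (10968.pem alongside 51391683.0, 109682.pem alongside 3ec20f2e.0). The hash half can be recomputed from a PEM like so:

openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/10968.pem
# prints an 8-hex-digit hash; c_rehash/update-ca-certificates use it to name
# the <hash>.0 entries under /etc/ssl/certs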

TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-470148 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
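The go-template in that command indexes the first node and iterates its metadata.labels map, printing only the keys. Run directly it looks like:

kubectl --context functional-470148 get nodes \
  -o go-template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'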

TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-470148 ssh "sudo systemctl is-active crio": exit status 1 (250.9072ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)
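The non-zero exit is the pass condition here: `systemctl is-active` exits 3 for an inactive unit, which is the "Process exited with status 3" reported on the ssh layer, and minikube folds that remote failure into its own exit status 1. Checked by hand:

out/minikube-linux-amd64 -p functional-470148 ssh "sudo systemctl is-active crio"
echo $?   # 1 here: crio is inactive (remote systemctl exit 3), so minikube exits non-zero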

TestFunctional/parallel/License (0.21s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.25s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-470148 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-470148 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-7lbf8" [49bf4c9f-f4b3-4f0d-9283-6034cf4f5bc8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-7lbf8" [49bf4c9f-f4b3-4f0d-9283-6034cf4f5bc8] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.004480457s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.25s)

TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.96s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.96s)

TestFunctional/parallel/DockerEnv/bash (0.94s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-470148 docker-env) && out/minikube-linux-amd64 status -p functional-470148"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-470148 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.94s)
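`docker-env` prints export statements (DOCKER_HOST and friends), so eval-ing its output points the host's docker client at the daemon inside the VM for the rest of the shell session, which is exactly what the two bash -c invocations above do:

eval "$(out/minikube-linux-amd64 -p functional-470148 docker-env)"
docker images   # now talks to the VM-side daemon, same image set as `image ls` below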

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
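All three subtests run the same command; `update-context` rewrites the profile's kubeconfig entry to match the VM's current address, which matters after an IP or port change:

out/minikube-linux-amd64 -p functional-470148 update-context
kubectl config current-context   # -> functional-470148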

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-470148 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-470148
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-470148
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-470148 image ls --format short --alsologtostderr:
I0812 10:34:24.070192   21543 out.go:291] Setting OutFile to fd 1 ...
I0812 10:34:24.070463   21543 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:34:24.070472   21543 out.go:304] Setting ErrFile to fd 2...
I0812 10:34:24.070476   21543 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:34:24.070673   21543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3796/.minikube/bin
I0812 10:34:24.071247   21543 config.go:182] Loaded profile config "functional-470148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 10:34:24.071345   21543 config.go:182] Loaded profile config "functional-470148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 10:34:24.071760   21543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0812 10:34:24.071802   21543 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:34:24.087910   21543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42295
I0812 10:34:24.088502   21543 main.go:141] libmachine: () Calling .GetVersion
I0812 10:34:24.089158   21543 main.go:141] libmachine: Using API Version  1
I0812 10:34:24.089189   21543 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:34:24.089645   21543 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:34:24.089900   21543 main.go:141] libmachine: (functional-470148) Calling .GetState
I0812 10:34:24.092084   21543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0812 10:34:24.092128   21543 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:34:24.107719   21543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
I0812 10:34:24.108285   21543 main.go:141] libmachine: () Calling .GetVersion
I0812 10:34:24.108882   21543 main.go:141] libmachine: Using API Version  1
I0812 10:34:24.108904   21543 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:34:24.109277   21543 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:34:24.109474   21543 main.go:141] libmachine: (functional-470148) Calling .DriverName
I0812 10:34:24.109697   21543 ssh_runner.go:195] Run: systemctl --version
I0812 10:34:24.109726   21543 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
I0812 10:34:24.113284   21543 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:34:24.113709   21543 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:34:24.113746   21543 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:34:24.113900   21543 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
I0812 10:34:24.114099   21543 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:34:24.114249   21543 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
I0812 10:34:24.114395   21543 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/functional-470148/id_rsa Username:docker}
I0812 10:34:24.208056   21543 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0812 10:34:24.234379   21543 main.go:141] libmachine: Making call to close driver server
I0812 10:34:24.234393   21543 main.go:141] libmachine: (functional-470148) Calling .Close
I0812 10:34:24.234713   21543 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:34:24.234736   21543 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 10:34:24.234745   21543 main.go:141] libmachine: Making call to close driver server
I0812 10:34:24.234753   21543 main.go:141] libmachine: (functional-470148) Calling .Close
I0812 10:34:24.234970   21543 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:34:24.234987   21543 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 10:34:24.235008   21543 main.go:141] libmachine: (functional-470148) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-470148 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/kicbase/echo-server               | functional-470148 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 55bb025d2cfa5 | 84.7MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/minikube-local-cache-test | functional-470148 | b64c7c3d7946c | 30B    |
| registry.k8s.io/kube-scheduler              | v1.30.3           | 3edc18e7b7672 | 62MB   |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 76932a3b37d7e | 111MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 1f6d574d502f3 | 117MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | latest            | a72860cb95fd5 | 188MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-470148 image ls --format table --alsologtostderr:
I0812 10:34:26.756745   21684 out.go:291] Setting OutFile to fd 1 ...
I0812 10:34:26.757204   21684 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:34:26.757260   21684 out.go:304] Setting ErrFile to fd 2...
I0812 10:34:26.757277   21684 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:34:26.757751   21684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3796/.minikube/bin
I0812 10:34:26.758903   21684 config.go:182] Loaded profile config "functional-470148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 10:34:26.758997   21684 config.go:182] Loaded profile config "functional-470148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 10:34:26.759358   21684 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0812 10:34:26.759408   21684 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:34:26.774616   21684 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41863
I0812 10:34:26.775122   21684 main.go:141] libmachine: () Calling .GetVersion
I0812 10:34:26.775748   21684 main.go:141] libmachine: Using API Version  1
I0812 10:34:26.775783   21684 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:34:26.776105   21684 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:34:26.776289   21684 main.go:141] libmachine: (functional-470148) Calling .GetState
I0812 10:34:26.778273   21684 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0812 10:34:26.778317   21684 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:34:26.793147   21684 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34971
I0812 10:34:26.793576   21684 main.go:141] libmachine: () Calling .GetVersion
I0812 10:34:26.794164   21684 main.go:141] libmachine: Using API Version  1
I0812 10:34:26.794197   21684 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:34:26.794583   21684 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:34:26.794822   21684 main.go:141] libmachine: (functional-470148) Calling .DriverName
I0812 10:34:26.795031   21684 ssh_runner.go:195] Run: systemctl --version
I0812 10:34:26.795061   21684 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
I0812 10:34:26.798141   21684 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:34:26.798703   21684 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:34:26.798734   21684 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:34:26.798939   21684 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
I0812 10:34:26.799122   21684 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:34:26.799264   21684 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
I0812 10:34:26.799443   21684 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/functional-470148/id_rsa Username:docker}
I0812 10:34:26.883409   21684 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0812 10:34:26.916549   21684 main.go:141] libmachine: Making call to close driver server
I0812 10:34:26.916570   21684 main.go:141] libmachine: (functional-470148) Calling .Close
I0812 10:34:26.916871   21684 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:34:26.916899   21684 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 10:34:26.916911   21684 main.go:141] libmachine: (functional-470148) DBG | Closing plugin on server side
I0812 10:34:26.916915   21684 main.go:141] libmachine: Making call to close driver server
I0812 10:34:26.916928   21684 main.go:141] libmachine: (functional-470148) Calling .Close
I0812 10:34:26.917203   21684 main.go:141] libmachine: (functional-470148) DBG | Closing plugin on server side
I0812 10:34:26.917174   21684 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:34:26.917247   21684 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-470148 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117000000"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"84700000"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"62000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-470148"],"size":"4940000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"b64c7c3d7946c6c37667b1e9db0088bce54e0763145426ab1b740a9235cf6138","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-470148"],"size":"30"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"111000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-470148 image ls --format json --alsologtostderr:
I0812 10:34:26.539189   21661 out.go:291] Setting OutFile to fd 1 ...
I0812 10:34:26.539320   21661 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:34:26.539331   21661 out.go:304] Setting ErrFile to fd 2...
I0812 10:34:26.539337   21661 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:34:26.539565   21661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3796/.minikube/bin
I0812 10:34:26.540173   21661 config.go:182] Loaded profile config "functional-470148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 10:34:26.540353   21661 config.go:182] Loaded profile config "functional-470148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 10:34:26.540856   21661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0812 10:34:26.540910   21661 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:34:26.557071   21661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46135
I0812 10:34:26.557655   21661 main.go:141] libmachine: () Calling .GetVersion
I0812 10:34:26.558319   21661 main.go:141] libmachine: Using API Version  1
I0812 10:34:26.558345   21661 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:34:26.558814   21661 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:34:26.559040   21661 main.go:141] libmachine: (functional-470148) Calling .GetState
I0812 10:34:26.561233   21661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0812 10:34:26.561279   21661 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:34:26.578366   21661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43175
I0812 10:34:26.578843   21661 main.go:141] libmachine: () Calling .GetVersion
I0812 10:34:26.579318   21661 main.go:141] libmachine: Using API Version  1
I0812 10:34:26.579342   21661 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:34:26.579744   21661 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:34:26.579961   21661 main.go:141] libmachine: (functional-470148) Calling .DriverName
I0812 10:34:26.580176   21661 ssh_runner.go:195] Run: systemctl --version
I0812 10:34:26.580215   21661 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
I0812 10:34:26.583767   21661 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:34:26.584264   21661 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:34:26.584297   21661 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:34:26.584457   21661 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
I0812 10:34:26.584650   21661 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:34:26.584830   21661 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
I0812 10:34:26.585019   21661 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/functional-470148/id_rsa Username:docker}
I0812 10:34:26.675199   21661 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0812 10:34:26.707308   21661 main.go:141] libmachine: Making call to close driver server
I0812 10:34:26.707331   21661 main.go:141] libmachine: (functional-470148) Calling .Close
I0812 10:34:26.707634   21661 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:34:26.707664   21661 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 10:34:26.707675   21661 main.go:141] libmachine: Making call to close driver server
I0812 10:34:26.707684   21661 main.go:141] libmachine: (functional-470148) Calling .Close
I0812 10:34:26.707960   21661 main.go:141] libmachine: (functional-470148) DBG | Closing plugin on server side
I0812 10:34:26.707994   21661 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:34:26.708006   21661 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
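The JSON form is the one meant for post-processing; for example, assuming jq is available on the host:

out/minikube-linux-amd64 -p functional-470148 image ls --format json \
  | jq -r '.[] | "\(.repoTags[0]) \(.size)"'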

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-470148 image ls --format yaml --alsologtostderr:
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "111000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "84700000"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117000000"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "62000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-470148
size: "4940000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: b64c7c3d7946c6c37667b1e9db0088bce54e0763145426ab1b740a9235cf6138
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-470148
size: "30"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-470148 image ls --format yaml --alsologtostderr:
I0812 10:34:24.288206   21566 out.go:291] Setting OutFile to fd 1 ...
I0812 10:34:24.288346   21566 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:34:24.288352   21566 out.go:304] Setting ErrFile to fd 2...
I0812 10:34:24.288358   21566 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:34:24.288695   21566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3796/.minikube/bin
I0812 10:34:24.289404   21566 config.go:182] Loaded profile config "functional-470148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 10:34:24.289503   21566 config.go:182] Loaded profile config "functional-470148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 10:34:24.289872   21566 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0812 10:34:24.289927   21566 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:34:24.308393   21566 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40633
I0812 10:34:24.308863   21566 main.go:141] libmachine: () Calling .GetVersion
I0812 10:34:24.309736   21566 main.go:141] libmachine: Using API Version  1
I0812 10:34:24.309771   21566 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:34:24.310223   21566 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:34:24.310474   21566 main.go:141] libmachine: (functional-470148) Calling .GetState
I0812 10:34:24.312994   21566 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0812 10:34:24.313053   21566 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:34:24.330419   21566 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45031
I0812 10:34:24.331060   21566 main.go:141] libmachine: () Calling .GetVersion
I0812 10:34:24.331600   21566 main.go:141] libmachine: Using API Version  1
I0812 10:34:24.331626   21566 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:34:24.332061   21566 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:34:24.332263   21566 main.go:141] libmachine: (functional-470148) Calling .DriverName
I0812 10:34:24.332486   21566 ssh_runner.go:195] Run: systemctl --version
I0812 10:34:24.332517   21566 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
I0812 10:34:24.336408   21566 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:34:24.337053   21566 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:34:24.337100   21566 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:34:24.337322   21566 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
I0812 10:34:24.337706   21566 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:34:24.338103   21566 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
I0812 10:34:24.338370   21566 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/functional-470148/id_rsa Username:docker}
I0812 10:34:24.429297   21566 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0812 10:34:24.469565   21566 main.go:141] libmachine: Making call to close driver server
I0812 10:34:24.469582   21566 main.go:141] libmachine: (functional-470148) Calling .Close
I0812 10:34:24.469886   21566 main.go:141] libmachine: (functional-470148) DBG | Closing plugin on server side
I0812 10:34:24.469961   21566 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:34:24.469982   21566 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 10:34:24.469999   21566 main.go:141] libmachine: Making call to close driver server
I0812 10:34:24.470010   21566 main.go:141] libmachine: (functional-470148) Calling .Close
I0812 10:34:24.470308   21566 main.go:141] libmachine: (functional-470148) DBG | Closing plugin on server side
I0812 10:34:24.470400   21566 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:34:24.470449   21566 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-470148 ssh pgrep buildkitd: exit status 1 (205.44765ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 image build -t localhost/my-image:functional-470148 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-470148 image build -t localhost/my-image:functional-470148 testdata/build --alsologtostderr: (2.760919068s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-470148 image build -t localhost/my-image:functional-470148 testdata/build --alsologtostderr:
I0812 10:34:24.732156   21620 out.go:291] Setting OutFile to fd 1 ...
I0812 10:34:24.732560   21620 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:34:24.732571   21620 out.go:304] Setting ErrFile to fd 2...
I0812 10:34:24.732576   21620 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:34:24.732757   21620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3796/.minikube/bin
I0812 10:34:24.733345   21620 config.go:182] Loaded profile config "functional-470148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 10:34:24.734302   21620 config.go:182] Loaded profile config "functional-470148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 10:34:24.734680   21620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0812 10:34:24.734748   21620 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:34:24.749894   21620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36527
I0812 10:34:24.750455   21620 main.go:141] libmachine: () Calling .GetVersion
I0812 10:34:24.751022   21620 main.go:141] libmachine: Using API Version  1
I0812 10:34:24.751047   21620 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:34:24.751417   21620 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:34:24.751618   21620 main.go:141] libmachine: (functional-470148) Calling .GetState
I0812 10:34:24.753768   21620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0812 10:34:24.753817   21620 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:34:24.769155   21620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46835
I0812 10:34:24.769670   21620 main.go:141] libmachine: () Calling .GetVersion
I0812 10:34:24.770201   21620 main.go:141] libmachine: Using API Version  1
I0812 10:34:24.770232   21620 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:34:24.770661   21620 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:34:24.770951   21620 main.go:141] libmachine: (functional-470148) Calling .DriverName
I0812 10:34:24.771289   21620 ssh_runner.go:195] Run: systemctl --version
I0812 10:34:24.771335   21620 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
I0812 10:34:24.774832   21620 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:34:24.775257   21620 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:34:24.775286   21620 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:34:24.775429   21620 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
I0812 10:34:24.775658   21620 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:34:24.775821   21620 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
I0812 10:34:24.775971   21620 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/functional-470148/id_rsa Username:docker}
I0812 10:34:24.861927   21620 build_images.go:161] Building image from path: /tmp/build.3433639834.tar
I0812 10:34:24.862011   21620 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0812 10:34:24.875747   21620 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3433639834.tar
I0812 10:34:24.886194   21620 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3433639834.tar: stat -c "%s %y" /var/lib/minikube/build/build.3433639834.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3433639834.tar': No such file or directory
I0812 10:34:24.886229   21620 ssh_runner.go:362] scp /tmp/build.3433639834.tar --> /var/lib/minikube/build/build.3433639834.tar (3072 bytes)
I0812 10:34:24.923466   21620 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3433639834
I0812 10:34:24.940612   21620 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3433639834 -xf /var/lib/minikube/build/build.3433639834.tar
I0812 10:34:24.953079   21620 docker.go:360] Building image: /var/lib/minikube/build/build.3433639834
I0812 10:34:24.953167   21620 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-470148 /var/lib/minikube/build/build.3433639834
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 97B 0.0s done
#1 DONE 0.2s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:cc8a85c425e9fae8d2aa887592514e91bee1df742b85ef01f1f73bf9c0038809
#8 writing image sha256:cc8a85c425e9fae8d2aa887592514e91bee1df742b85ef01f1f73bf9c0038809 done
#8 naming to localhost/my-image:functional-470148 0.0s done
#8 DONE 0.1s
I0812 10:34:27.403307   21620 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-470148 /var/lib/minikube/build/build.3433639834: (2.450111097s)
I0812 10:34:27.403395   21620 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3433639834
I0812 10:34:27.423719   21620 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3433639834.tar
I0812 10:34:27.435927   21620 build_images.go:217] Built localhost/my-image:functional-470148 from /tmp/build.3433639834.tar
I0812 10:34:27.435969   21620 build_images.go:133] succeeded building to: functional-470148
I0812 10:34:27.435975   21620 build_images.go:134] failed building to: 
I0812 10:34:27.436030   21620 main.go:141] libmachine: Making call to close driver server
I0812 10:34:27.436053   21620 main.go:141] libmachine: (functional-470148) Calling .Close
I0812 10:34:27.436346   21620 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:34:27.436369   21620 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 10:34:27.436369   21620 main.go:141] libmachine: (functional-470148) DBG | Closing plugin on server side
I0812 10:34:27.436383   21620 main.go:141] libmachine: Making call to close driver server
I0812 10:34:27.436391   21620 main.go:141] libmachine: (functional-470148) Calling .Close
I0812 10:34:27.436617   21620 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:34:27.436646   21620 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 10:34:27.436667   21620 main.go:141] libmachine: (functional-470148) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.17s)
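
For reference, the Dockerfile driving the three build steps above (FROM the k8s-minikube busybox image, RUN true, ADD content.txt) can be approximated as follows. This is a hypothetical reconstruction inferred from the BuildKit log, not the repository's actual testdata/build contents, and the content.txt payload is a placeholder:

	# Recreate an equivalent build context (sketch; file contents are assumptions)
	mkdir -p testdata/build && cd testdata/build
	printf 'placeholder\n' > content.txt
	printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
	out/minikube-linux-amd64 -p functional-470148 image build -t localhost/my-image:functional-470148 .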

TestFunctional/parallel/ImageCommands/Setup (1.59s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.570371893s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-470148
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.59s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 image load --daemon kicbase/echo-server:functional-470148 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-470148 image load --daemon kicbase/echo-server:functional-470148 --alsologtostderr: (1.125523319s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 image load --daemon kicbase/echo-server:functional-470148 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.80s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-470148
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 image load --daemon kicbase/echo-server:functional-470148 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.50s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 image save kicbase/echo-server:functional-470148 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 image rm kicbase/echo-server:functional-470148 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-470148
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 image save --daemon kicbase/echo-server:functional-470148 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-470148
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.71s)
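
Taken together, the ImageCommands subtests above exercise a full image round-trip between the host Docker daemon and the cluster runtime. A condensed sketch of that flow, using the commands as they appear in the log (the /tmp tarball path is a placeholder for the Jenkins workspace path used above):

	# load host image into the cluster, save it to a tarball, remove it, restore it
	out/minikube-linux-amd64 -p functional-470148 image load --daemon kicbase/echo-server:functional-470148
	out/minikube-linux-amd64 -p functional-470148 image save kicbase/echo-server:functional-470148 /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-470148 image rm kicbase/echo-server:functional-470148
	out/minikube-linux-amd64 -p functional-470148 image load /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-470148 image save --daemon kicbase/echo-server:functional-470148
	out/minikube-linux-amd64 -p functional-470148 image ls    # verify the tag after each step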

TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "385.924133ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "50.335386ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "246.206595ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "48.616421ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)
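
The timing assertions above cover both the human-readable and JSON forms of profile list; --light is faster because it skips probing cluster status. A sketch of inspecting the JSON output on a workstation (the jq filter assumes the usual valid/invalid top-level keys; verify against your minikube version):

	out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'
	out/minikube-linux-amd64 profile list -o json --light    # no status probes, hence the ~50ms runtime above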

TestFunctional/parallel/MountCmd/any-port (24.8s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-470148 /tmp/TestFunctionalparallelMountCmdany-port43403757/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723458834342412950" to /tmp/TestFunctionalparallelMountCmdany-port43403757/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723458834342412950" to /tmp/TestFunctionalparallelMountCmdany-port43403757/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723458834342412950" to /tmp/TestFunctionalparallelMountCmdany-port43403757/001/test-1723458834342412950
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-470148 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (237.858401ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 12 10:33 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 12 10:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 12 10:33 test-1723458834342412950
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh cat /mount-9p/test-1723458834342412950
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-470148 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3c9c65db-1995-472e-9ada-7e0a7f909cd1] Pending
helpers_test.go:344: "busybox-mount" [3c9c65db-1995-472e-9ada-7e0a7f909cd1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3c9c65db-1995-472e-9ada-7e0a7f909cd1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3c9c65db-1995-472e-9ada-7e0a7f909cd1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 22.004747791s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-470148 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-470148 /tmp/TestFunctionalparallelMountCmdany-port43403757/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (24.80s)
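
The any-port flow above reduces to: start a 9p mount in the background, poll findmnt in the guest until it appears (the first findmnt failing with exit status 1 is expected), exercise the mount from a pod, then unmount. A condensed sketch with a placeholder host directory:

	out/minikube-linux-amd64 mount -p functional-470148 /tmp/hostdir:/mount-9p &
	out/minikube-linux-amd64 -p functional-470148 ssh "findmnt -T /mount-9p | grep 9p"    # retried until the mount shows up
	out/minikube-linux-amd64 -p functional-470148 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-470148 ssh "sudo umount -f /mount-9p"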

TestFunctional/parallel/ServiceCmd/List (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.30s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 service list -o json
functional_test.go:1494: Took "272.509573ms" to run "out/minikube-linux-amd64 -p functional-470148 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.217:31114
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.217:31114
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)
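
The ServiceCmd subtests resolve the same NodePort endpoint three ways (HTTPS, formatted, plain URL). A sketch of capturing and probing it; the curl call is a hypothetical follow-up, not part of the test:

	URL=$(out/minikube-linux-amd64 -p functional-470148 service hello-node --url)
	echo "$URL"       # e.g. http://192.168.39.217:31114, as found above
	curl -fsS "$URL"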

TestFunctional/parallel/MountCmd/specific-port (1.74s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-470148 /tmp/TestFunctionalparallelMountCmdspecific-port3041660215/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-470148 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (226.420441ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-470148 /tmp/TestFunctionalparallelMountCmdspecific-port3041660215/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-470148 ssh "sudo umount -f /mount-9p": exit status 1 (230.276478ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-470148 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-470148 /tmp/TestFunctionalparallelMountCmdspecific-port3041660215/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.48s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-470148 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2451436995/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-470148 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2451436995/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-470148 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2451436995/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-470148 ssh "findmnt -T" /mount1: exit status 1 (265.159935ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-470148 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-470148 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-470148 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2451436995/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-470148 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2451436995/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-470148 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2451436995/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.48s)
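
VerifyCleanup relies on a single mount --kill=true call tearing down every mount daemon for the profile, which is why the per-mount stopping steps afterwards find no parent process. Sketch with a placeholder host directory:

	out/minikube-linux-amd64 mount -p functional-470148 /tmp/src:/mount1 &
	out/minikube-linux-amd64 mount -p functional-470148 /tmp/src:/mount2 &
	out/minikube-linux-amd64 mount -p functional-470148 /tmp/src:/mount3 &
	out/minikube-linux-amd64 mount -p functional-470148 --kill=true    # kills all three daemons at once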

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-470148
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-470148
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-470148
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (197.15s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-135153 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-135153 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m9.250580901s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-135153 cache add gcr.io/k8s-minikube/gvisor-addon:2
E0812 11:21:53.282521   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
E0812 11:22:03.523544   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-135153 cache add gcr.io/k8s-minikube/gvisor-addon:2: (25.917933921s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-135153 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-135153 addons enable gvisor: (4.308698768s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [2a1d4ab5-3010-4ae8-9c79-2a28351ad512] Running
E0812 11:22:24.004089   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.007982891s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-135153 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [41c91cfe-4eea-4702-8856-373823acb530] Pending
helpers_test.go:344: "nginx-gvisor" [41c91cfe-4eea-4702-8856-373823acb530] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [41c91cfe-4eea-4702-8856-373823acb530] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 15.004146149s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-135153
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-135153: (2.319721501s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-135153 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-135153 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m1.946359046s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [2a1d4ab5-3010-4ae8-9c79-2a28351ad512] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:344: "gvisor" [2a1d4ab5-3010-4ae8-9c79-2a28351ad512] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.005065453s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [41c91cfe-4eea-4702-8856-373823acb530] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.00457098s
helpers_test.go:175: Cleaning up "gvisor-135153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-135153
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-135153: (1.169628623s)
--- PASS: TestGvisorAddon (197.15s)
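
Condensing the gVisor flow above: start with the containerd runtime, enable the addon, schedule nginx via the repo's testdata manifest, and confirm it survives a stop/start cycle. The label selector in the last step is taken from the wait conditions above:

	out/minikube-linux-amd64 start -p gvisor-135153 --memory=2200 --container-runtime=containerd --driver=kvm2
	out/minikube-linux-amd64 -p gvisor-135153 addons enable gvisor
	kubectl --context gvisor-135153 replace --force -f testdata/nginx-gvisor.yaml
	kubectl --context gvisor-135153 get pods -l run=nginx,runtime=gvisor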

TestMultiControlPlane/serial/StartCluster (244.92s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-766221 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0812 10:35:16.594632   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
E0812 10:35:44.279645   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-766221 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (4m4.179988947s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (244.92s)
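
StartCluster provisions a multi-control-plane cluster in one invocation via the --ha flag (three control planes here, per the m02/m03 entries later in the log), then verifies node state. The two commands as run above:

	out/minikube-linux-amd64 start -p ha-766221 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2
	out/minikube-linux-amd64 -p ha-766221 status -v=7 --alsologtostderr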

TestMultiControlPlane/serial/DeployApp (5.67s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-766221 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-766221 -- rollout status deployment/busybox
E0812 10:38:43.742574   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
E0812 10:38:43.748062   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-766221 -- rollout status deployment/busybox: (3.048444667s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-766221 -- get pods -o jsonpath='{.items[*].status.podIP}'
E0812 10:38:43.758537   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
E0812 10:38:43.778737   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
E0812 10:38:43.819083   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
E0812 10:38:43.899384   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-766221 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-766221 -- exec busybox-fc5497c4f-b2hdt -- nslookup kubernetes.io
E0812 10:38:44.060394   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-766221 -- exec busybox-fc5497c4f-bwpm2 -- nslookup kubernetes.io
E0812 10:38:44.380529   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-766221 -- exec busybox-fc5497c4f-rcgq2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-766221 -- exec busybox-fc5497c4f-b2hdt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-766221 -- exec busybox-fc5497c4f-bwpm2 -- nslookup kubernetes.default
E0812 10:38:45.020836   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-766221 -- exec busybox-fc5497c4f-rcgq2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-766221 -- exec busybox-fc5497c4f-b2hdt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-766221 -- exec busybox-fc5497c4f-bwpm2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-766221 -- exec busybox-fc5497c4f-rcgq2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.67s)

TestMultiControlPlane/serial/PingHostFromPods (1.44s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-766221 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-766221 -- exec busybox-fc5497c4f-b2hdt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0812 10:38:46.301006   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-766221 -- exec busybox-fc5497c4f-b2hdt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-766221 -- exec busybox-fc5497c4f-bwpm2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-766221 -- exec busybox-fc5497c4f-bwpm2 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-766221 -- exec busybox-fc5497c4f-rcgq2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-766221 -- exec busybox-fc5497c4f-rcgq2 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.44s)
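
PingHostFromPods resolves host.minikube.internal from inside each pod and pings the resulting gateway address; the awk/cut pipeline (copied from the log) extracts the IP from nslookup's fifth output line:

	out/minikube-linux-amd64 kubectl -p ha-766221 -- exec busybox-fc5497c4f-b2hdt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-amd64 kubectl -p ha-766221 -- exec busybox-fc5497c4f-b2hdt -- sh -c "ping -c 1 192.168.39.1"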

TestMultiControlPlane/serial/AddWorkerNode (69.33s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-766221 -v=7 --alsologtostderr
E0812 10:38:48.862156   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
E0812 10:38:53.983248   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
E0812 10:39:04.223853   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
E0812 10:39:24.704382   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-766221 -v=7 --alsologtostderr: (1m8.421595496s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (69.33s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-766221 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.6s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.60s)

TestMultiControlPlane/serial/CopyFile (14.66s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 cp testdata/cp-test.txt ha-766221:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 cp ha-766221:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2902724691/001/cp-test_ha-766221.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 cp ha-766221:/home/docker/cp-test.txt ha-766221-m02:/home/docker/cp-test_ha-766221_ha-766221-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m02 "sudo cat /home/docker/cp-test_ha-766221_ha-766221-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 cp ha-766221:/home/docker/cp-test.txt ha-766221-m03:/home/docker/cp-test_ha-766221_ha-766221-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m03 "sudo cat /home/docker/cp-test_ha-766221_ha-766221-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 cp ha-766221:/home/docker/cp-test.txt ha-766221-m04:/home/docker/cp-test_ha-766221_ha-766221-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m04 "sudo cat /home/docker/cp-test_ha-766221_ha-766221-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 cp testdata/cp-test.txt ha-766221-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 cp ha-766221-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2902724691/001/cp-test_ha-766221-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 cp ha-766221-m02:/home/docker/cp-test.txt ha-766221:/home/docker/cp-test_ha-766221-m02_ha-766221.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221 "sudo cat /home/docker/cp-test_ha-766221-m02_ha-766221.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 cp ha-766221-m02:/home/docker/cp-test.txt ha-766221-m03:/home/docker/cp-test_ha-766221-m02_ha-766221-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m03 "sudo cat /home/docker/cp-test_ha-766221-m02_ha-766221-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 cp ha-766221-m02:/home/docker/cp-test.txt ha-766221-m04:/home/docker/cp-test_ha-766221-m02_ha-766221-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m04 "sudo cat /home/docker/cp-test_ha-766221-m02_ha-766221-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 cp testdata/cp-test.txt ha-766221-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m03 "sudo cat /home/docker/cp-test.txt"
E0812 10:40:05.664848   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 cp ha-766221-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2902724691/001/cp-test_ha-766221-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 cp ha-766221-m03:/home/docker/cp-test.txt ha-766221:/home/docker/cp-test_ha-766221-m03_ha-766221.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221 "sudo cat /home/docker/cp-test_ha-766221-m03_ha-766221.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 cp ha-766221-m03:/home/docker/cp-test.txt ha-766221-m02:/home/docker/cp-test_ha-766221-m03_ha-766221-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m02 "sudo cat /home/docker/cp-test_ha-766221-m03_ha-766221-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 cp ha-766221-m03:/home/docker/cp-test.txt ha-766221-m04:/home/docker/cp-test_ha-766221-m03_ha-766221-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m04 "sudo cat /home/docker/cp-test_ha-766221-m03_ha-766221-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 cp testdata/cp-test.txt ha-766221-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 cp ha-766221-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2902724691/001/cp-test_ha-766221-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 cp ha-766221-m04:/home/docker/cp-test.txt ha-766221:/home/docker/cp-test_ha-766221-m04_ha-766221.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221 "sudo cat /home/docker/cp-test_ha-766221-m04_ha-766221.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 cp ha-766221-m04:/home/docker/cp-test.txt ha-766221-m02:/home/docker/cp-test_ha-766221-m04_ha-766221-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m02 "sudo cat /home/docker/cp-test_ha-766221-m04_ha-766221-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 cp ha-766221-m04:/home/docker/cp-test.txt ha-766221-m03:/home/docker/cp-test_ha-766221-m04_ha-766221-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m03 "sudo cat /home/docker/cp-test_ha-766221-m04_ha-766221-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.66s)
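
CopyFile walks every source/destination pairing across the four nodes, verifying each copy by ssh'ing into the destination node and cat'ing the file back. One representative pairing from the matrix above:

	out/minikube-linux-amd64 -p ha-766221 cp testdata/cp-test.txt ha-766221-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-766221 ssh -n ha-766221-m02 "sudo cat /home/docker/cp-test.txt"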

TestMultiControlPlane/serial/StopSecondaryNode (14.05s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 node stop m02 -v=7 --alsologtostderr
E0812 10:40:16.594536   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-766221 node stop m02 -v=7 --alsologtostderr: (13.34836413s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-766221 status -v=7 --alsologtostderr: exit status 7 (702.607933ms)

-- stdout --
	ha-766221
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-766221-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-766221-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-766221-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0812 10:40:25.626688   26234 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:40:25.626994   26234 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:40:25.627009   26234 out.go:304] Setting ErrFile to fd 2...
	I0812 10:40:25.627013   26234 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:40:25.627243   26234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3796/.minikube/bin
	I0812 10:40:25.627469   26234 out.go:298] Setting JSON to false
	I0812 10:40:25.627499   26234 mustload.go:65] Loading cluster: ha-766221
	I0812 10:40:25.627541   26234 notify.go:220] Checking for updates...
	I0812 10:40:25.627933   26234 config.go:182] Loaded profile config "ha-766221": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 10:40:25.627951   26234 status.go:255] checking status of ha-766221 ...
	I0812 10:40:25.628407   26234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:40:25.628487   26234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:40:25.644707   26234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36459
	I0812 10:40:25.645134   26234 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:40:25.645733   26234 main.go:141] libmachine: Using API Version  1
	I0812 10:40:25.645756   26234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:40:25.646211   26234 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:40:25.646472   26234 main.go:141] libmachine: (ha-766221) Calling .GetState
	I0812 10:40:25.648300   26234 status.go:330] ha-766221 host status = "Running" (err=<nil>)
	I0812 10:40:25.648322   26234 host.go:66] Checking if "ha-766221" exists ...
	I0812 10:40:25.648777   26234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:40:25.648832   26234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:40:25.664386   26234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41683
	I0812 10:40:25.664867   26234 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:40:25.665355   26234 main.go:141] libmachine: Using API Version  1
	I0812 10:40:25.665377   26234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:40:25.665695   26234 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:40:25.665935   26234 main.go:141] libmachine: (ha-766221) Calling .GetIP
	I0812 10:40:25.668969   26234 main.go:141] libmachine: (ha-766221) DBG | domain ha-766221 has defined MAC address 52:54:00:2c:07:45 in network mk-ha-766221
	I0812 10:40:25.669624   26234 main.go:141] libmachine: (ha-766221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:07:45", ip: ""} in network mk-ha-766221: {Iface:virbr1 ExpiryTime:2024-08-12 11:34:50 +0000 UTC Type:0 Mac:52:54:00:2c:07:45 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-766221 Clientid:01:52:54:00:2c:07:45}
	I0812 10:40:25.669671   26234 main.go:141] libmachine: (ha-766221) DBG | domain ha-766221 has defined IP address 192.168.39.2 and MAC address 52:54:00:2c:07:45 in network mk-ha-766221
	I0812 10:40:25.669877   26234 host.go:66] Checking if "ha-766221" exists ...
	I0812 10:40:25.670333   26234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:40:25.670384   26234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:40:25.685939   26234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39885
	I0812 10:40:25.686506   26234 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:40:25.687054   26234 main.go:141] libmachine: Using API Version  1
	I0812 10:40:25.687080   26234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:40:25.687543   26234 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:40:25.687795   26234 main.go:141] libmachine: (ha-766221) Calling .DriverName
	I0812 10:40:25.688018   26234 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:40:25.688046   26234 main.go:141] libmachine: (ha-766221) Calling .GetSSHHostname
	I0812 10:40:25.691170   26234 main.go:141] libmachine: (ha-766221) DBG | domain ha-766221 has defined MAC address 52:54:00:2c:07:45 in network mk-ha-766221
	I0812 10:40:25.691642   26234 main.go:141] libmachine: (ha-766221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:07:45", ip: ""} in network mk-ha-766221: {Iface:virbr1 ExpiryTime:2024-08-12 11:34:50 +0000 UTC Type:0 Mac:52:54:00:2c:07:45 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-766221 Clientid:01:52:54:00:2c:07:45}
	I0812 10:40:25.691676   26234 main.go:141] libmachine: (ha-766221) DBG | domain ha-766221 has defined IP address 192.168.39.2 and MAC address 52:54:00:2c:07:45 in network mk-ha-766221
	I0812 10:40:25.691845   26234 main.go:141] libmachine: (ha-766221) Calling .GetSSHPort
	I0812 10:40:25.692027   26234 main.go:141] libmachine: (ha-766221) Calling .GetSSHKeyPath
	I0812 10:40:25.692202   26234 main.go:141] libmachine: (ha-766221) Calling .GetSSHUsername
	I0812 10:40:25.692355   26234 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/ha-766221/id_rsa Username:docker}
	I0812 10:40:25.778320   26234 ssh_runner.go:195] Run: systemctl --version
	I0812 10:40:25.785423   26234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:40:25.803920   26234 kubeconfig.go:125] found "ha-766221" server: "https://192.168.39.254:8443"
	I0812 10:40:25.803950   26234 api_server.go:166] Checking apiserver status ...
	I0812 10:40:25.803979   26234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:40:25.819588   26234 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2003/cgroup
	W0812 10:40:25.830752   26234 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2003/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:40:25.830838   26234 ssh_runner.go:195] Run: ls
	I0812 10:40:25.836173   26234 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:40:25.841435   26234 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:40:25.841463   26234 status.go:422] ha-766221 apiserver status = Running (err=<nil>)
	I0812 10:40:25.841474   26234 status.go:257] ha-766221 status: &{Name:ha-766221 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:40:25.841491   26234 status.go:255] checking status of ha-766221-m02 ...
	I0812 10:40:25.841808   26234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:40:25.841850   26234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:40:25.857393   26234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37233
	I0812 10:40:25.857927   26234 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:40:25.858506   26234 main.go:141] libmachine: Using API Version  1
	I0812 10:40:25.858529   26234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:40:25.858934   26234 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:40:25.859112   26234 main.go:141] libmachine: (ha-766221-m02) Calling .GetState
	I0812 10:40:25.861478   26234 status.go:330] ha-766221-m02 host status = "Stopped" (err=<nil>)
	I0812 10:40:25.861498   26234 status.go:343] host is not running, skipping remaining checks
	I0812 10:40:25.861507   26234 status.go:257] ha-766221-m02 status: &{Name:ha-766221-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:40:25.861532   26234 status.go:255] checking status of ha-766221-m03 ...
	I0812 10:40:25.861969   26234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:40:25.862054   26234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:40:25.880321   26234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40349
	I0812 10:40:25.880826   26234 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:40:25.881429   26234 main.go:141] libmachine: Using API Version  1
	I0812 10:40:25.881453   26234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:40:25.881839   26234 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:40:25.882229   26234 main.go:141] libmachine: (ha-766221-m03) Calling .GetState
	I0812 10:40:25.883791   26234 status.go:330] ha-766221-m03 host status = "Running" (err=<nil>)
	I0812 10:40:25.883810   26234 host.go:66] Checking if "ha-766221-m03" exists ...
	I0812 10:40:25.884187   26234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:40:25.884241   26234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:40:25.900543   26234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36301
	I0812 10:40:25.901165   26234 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:40:25.901765   26234 main.go:141] libmachine: Using API Version  1
	I0812 10:40:25.901791   26234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:40:25.902163   26234 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:40:25.902352   26234 main.go:141] libmachine: (ha-766221-m03) Calling .GetIP
	I0812 10:40:25.906001   26234 main.go:141] libmachine: (ha-766221-m03) DBG | domain ha-766221-m03 has defined MAC address 52:54:00:c3:e5:7f in network mk-ha-766221
	I0812 10:40:25.906604   26234 main.go:141] libmachine: (ha-766221-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e5:7f", ip: ""} in network mk-ha-766221: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:27 +0000 UTC Type:0 Mac:52:54:00:c3:e5:7f Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:ha-766221-m03 Clientid:01:52:54:00:c3:e5:7f}
	I0812 10:40:25.906631   26234 main.go:141] libmachine: (ha-766221-m03) DBG | domain ha-766221-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:c3:e5:7f in network mk-ha-766221
	I0812 10:40:25.906855   26234 host.go:66] Checking if "ha-766221-m03" exists ...
	I0812 10:40:25.907172   26234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:40:25.907210   26234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:40:25.922605   26234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38289
	I0812 10:40:25.923081   26234 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:40:25.923701   26234 main.go:141] libmachine: Using API Version  1
	I0812 10:40:25.923739   26234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:40:25.924115   26234 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:40:25.924377   26234 main.go:141] libmachine: (ha-766221-m03) Calling .DriverName
	I0812 10:40:25.924660   26234 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:40:25.924695   26234 main.go:141] libmachine: (ha-766221-m03) Calling .GetSSHHostname
	I0812 10:40:25.928571   26234 main.go:141] libmachine: (ha-766221-m03) DBG | domain ha-766221-m03 has defined MAC address 52:54:00:c3:e5:7f in network mk-ha-766221
	I0812 10:40:25.929157   26234 main.go:141] libmachine: (ha-766221-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:e5:7f", ip: ""} in network mk-ha-766221: {Iface:virbr1 ExpiryTime:2024-08-12 11:37:27 +0000 UTC Type:0 Mac:52:54:00:c3:e5:7f Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:ha-766221-m03 Clientid:01:52:54:00:c3:e5:7f}
	I0812 10:40:25.929179   26234 main.go:141] libmachine: (ha-766221-m03) DBG | domain ha-766221-m03 has defined IP address 192.168.39.21 and MAC address 52:54:00:c3:e5:7f in network mk-ha-766221
	I0812 10:40:25.929452   26234 main.go:141] libmachine: (ha-766221-m03) Calling .GetSSHPort
	I0812 10:40:25.929692   26234 main.go:141] libmachine: (ha-766221-m03) Calling .GetSSHKeyPath
	I0812 10:40:25.929871   26234 main.go:141] libmachine: (ha-766221-m03) Calling .GetSSHUsername
	I0812 10:40:25.930020   26234 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/ha-766221-m03/id_rsa Username:docker}
	I0812 10:40:26.020529   26234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:40:26.041087   26234 kubeconfig.go:125] found "ha-766221" server: "https://192.168.39.254:8443"
	I0812 10:40:26.041118   26234 api_server.go:166] Checking apiserver status ...
	I0812 10:40:26.041172   26234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 10:40:26.059244   26234 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1958/cgroup
	W0812 10:40:26.072145   26234 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1958/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 10:40:26.072211   26234 ssh_runner.go:195] Run: ls
	I0812 10:40:26.077579   26234 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 10:40:26.082712   26234 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 10:40:26.082738   26234 status.go:422] ha-766221-m03 apiserver status = Running (err=<nil>)
	I0812 10:40:26.082748   26234 status.go:257] ha-766221-m03 status: &{Name:ha-766221-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:40:26.082768   26234 status.go:255] checking status of ha-766221-m04 ...
	I0812 10:40:26.083086   26234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:40:26.083123   26234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:40:26.098996   26234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40117
	I0812 10:40:26.099519   26234 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:40:26.100001   26234 main.go:141] libmachine: Using API Version  1
	I0812 10:40:26.100026   26234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:40:26.100358   26234 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:40:26.100587   26234 main.go:141] libmachine: (ha-766221-m04) Calling .GetState
	I0812 10:40:26.102206   26234 status.go:330] ha-766221-m04 host status = "Running" (err=<nil>)
	I0812 10:40:26.102225   26234 host.go:66] Checking if "ha-766221-m04" exists ...
	I0812 10:40:26.102545   26234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:40:26.102595   26234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:40:26.118321   26234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33467
	I0812 10:40:26.118823   26234 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:40:26.119345   26234 main.go:141] libmachine: Using API Version  1
	I0812 10:40:26.119370   26234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:40:26.119788   26234 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:40:26.120000   26234 main.go:141] libmachine: (ha-766221-m04) Calling .GetIP
	I0812 10:40:26.122888   26234 main.go:141] libmachine: (ha-766221-m04) DBG | domain ha-766221-m04 has defined MAC address 52:54:00:b0:e1:01 in network mk-ha-766221
	I0812 10:40:26.123446   26234 main.go:141] libmachine: (ha-766221-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e1:01", ip: ""} in network mk-ha-766221: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:06 +0000 UTC Type:0 Mac:52:54:00:b0:e1:01 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-766221-m04 Clientid:01:52:54:00:b0:e1:01}
	I0812 10:40:26.123473   26234 main.go:141] libmachine: (ha-766221-m04) DBG | domain ha-766221-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b0:e1:01 in network mk-ha-766221
	I0812 10:40:26.123694   26234 host.go:66] Checking if "ha-766221-m04" exists ...
	I0812 10:40:26.124123   26234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:40:26.124176   26234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:40:26.141100   26234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I0812 10:40:26.141638   26234 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:40:26.142406   26234 main.go:141] libmachine: Using API Version  1
	I0812 10:40:26.142464   26234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:40:26.142813   26234 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:40:26.143008   26234 main.go:141] libmachine: (ha-766221-m04) Calling .DriverName
	I0812 10:40:26.143230   26234 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 10:40:26.143276   26234 main.go:141] libmachine: (ha-766221-m04) Calling .GetSSHHostname
	I0812 10:40:26.148424   26234 main.go:141] libmachine: (ha-766221-m04) DBG | domain ha-766221-m04 has defined MAC address 52:54:00:b0:e1:01 in network mk-ha-766221
	I0812 10:40:26.149318   26234 main.go:141] libmachine: (ha-766221-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:e1:01", ip: ""} in network mk-ha-766221: {Iface:virbr1 ExpiryTime:2024-08-12 11:39:06 +0000 UTC Type:0 Mac:52:54:00:b0:e1:01 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-766221-m04 Clientid:01:52:54:00:b0:e1:01}
	I0812 10:40:26.149393   26234 main.go:141] libmachine: (ha-766221-m04) DBG | domain ha-766221-m04 has defined IP address 192.168.39.20 and MAC address 52:54:00:b0:e1:01 in network mk-ha-766221
	I0812 10:40:26.149672   26234 main.go:141] libmachine: (ha-766221-m04) Calling .GetSSHPort
	I0812 10:40:26.150085   26234 main.go:141] libmachine: (ha-766221-m04) Calling .GetSSHKeyPath
	I0812 10:40:26.150335   26234 main.go:141] libmachine: (ha-766221-m04) Calling .GetSSHUsername
	I0812 10:40:26.150563   26234 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/ha-766221-m04/id_rsa Username:docker}
	I0812 10:40:26.242815   26234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 10:40:26.261376   26234 status.go:257] ha-766221-m04 status: &{Name:ha-766221-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.05s)
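
The stderr trace above shows the three-step probe `minikube status` runs against each control-plane node: find the kube-apiserver process with pgrep, inspect its freezer cgroup (the "unable to find freezer cgroup" warning is non-fatal and only matters for detecting paused nodes), then poll /healthz on the load-balanced endpoint. A minimal Go sketch of that final healthz step, assuming the HA virtual IP from this log and using InsecureSkipVerify as a stand-in for loading the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkAPIServerHealthz polls the apiserver's /healthz endpoint, the same
// check the status log above reports as "returned 200: ok".
func checkAPIServerHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// A real client would load the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := checkAPIServerHealthz("https://192.168.39.254:8443"); err != nil {
		fmt.Println("apiserver not healthy:", err)
		return
	}
	fmt.Println("ok")
}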

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

TestMultiControlPlane/serial/RestartSecondaryNode (48.69s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-766221 node start m02 -v=7 --alsologtostderr: (47.714286495s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (48.69s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.58s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.58s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (259.02s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-766221 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-766221 -v=7 --alsologtostderr
E0812 10:41:27.585764   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-766221 -v=7 --alsologtostderr: (41.792525779s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-766221 --wait=true -v=7 --alsologtostderr
E0812 10:43:43.738198   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
E0812 10:44:11.426370   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
E0812 10:45:16.594106   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-766221 --wait=true -v=7 --alsologtostderr: (3m37.117128061s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-766221
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (259.02s)

TestMultiControlPlane/serial/DeleteSecondaryNode (8.76s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-766221 node delete m03 -v=7 --alsologtostderr: (7.94237011s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.76s)
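
The final assertion above feeds kubectl a Go template that walks every node's conditions and prints the status of the condition whose type is "Ready". The same template runs under Go's text/template; a self-contained sketch, with a miniature, hypothetical two-node payload standing in for real `kubectl get nodes -o json` output:

package main

import (
	"encoding/json"
	"log"
	"os"
	"text/template"
)

func main() {
	// Miniature `kubectl get nodes -o json` output: two nodes, one Ready.
	payload := []byte(`{"items":[
	  {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
	  {"status":{"conditions":[{"type":"Ready","status":"False"}]}}]}`)

	var nodes map[string]interface{}
	if err := json.Unmarshal(payload, &nodes); err != nil {
		log.Fatal(err)
	}

	// The same template the test passes to kubectl: range every node's
	// conditions and print the status of the "Ready" one, one per line.
	tmpl := template.Must(template.New("ready").Parse(
		`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	if err := tmpl.Execute(os.Stdout, nodes); err != nil {
		log.Fatal(err)
	}
}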

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

TestMultiControlPlane/serial/StopCluster (39.1s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-766221 stop -v=7 --alsologtostderr: (38.975696678s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-766221 status -v=7 --alsologtostderr: exit status 7 (123.241286ms)

-- stdout --
	ha-766221
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-766221-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-766221-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0812 10:46:23.256365   28683 out.go:291] Setting OutFile to fd 1 ...
	I0812 10:46:23.256655   28683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:46:23.256662   28683 out.go:304] Setting ErrFile to fd 2...
	I0812 10:46:23.256666   28683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 10:46:23.256854   28683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3796/.minikube/bin
	I0812 10:46:23.257038   28683 out.go:298] Setting JSON to false
	I0812 10:46:23.257063   28683 mustload.go:65] Loading cluster: ha-766221
	I0812 10:46:23.257247   28683 notify.go:220] Checking for updates...
	I0812 10:46:23.257477   28683 config.go:182] Loaded profile config "ha-766221": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 10:46:23.257496   28683 status.go:255] checking status of ha-766221 ...
	I0812 10:46:23.257873   28683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:46:23.257948   28683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:46:23.286565   28683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43527
	I0812 10:46:23.287041   28683 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:46:23.287785   28683 main.go:141] libmachine: Using API Version  1
	I0812 10:46:23.287829   28683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:46:23.288286   28683 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:46:23.288522   28683 main.go:141] libmachine: (ha-766221) Calling .GetState
	I0812 10:46:23.290631   28683 status.go:330] ha-766221 host status = "Stopped" (err=<nil>)
	I0812 10:46:23.290656   28683 status.go:343] host is not running, skipping remaining checks
	I0812 10:46:23.290663   28683 status.go:257] ha-766221 status: &{Name:ha-766221 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:46:23.290680   28683 status.go:255] checking status of ha-766221-m02 ...
	I0812 10:46:23.290960   28683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:46:23.290995   28683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:46:23.306019   28683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0812 10:46:23.306522   28683 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:46:23.307015   28683 main.go:141] libmachine: Using API Version  1
	I0812 10:46:23.307039   28683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:46:23.307315   28683 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:46:23.307575   28683 main.go:141] libmachine: (ha-766221-m02) Calling .GetState
	I0812 10:46:23.309566   28683 status.go:330] ha-766221-m02 host status = "Stopped" (err=<nil>)
	I0812 10:46:23.309585   28683 status.go:343] host is not running, skipping remaining checks
	I0812 10:46:23.309594   28683 status.go:257] ha-766221-m02 status: &{Name:ha-766221-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 10:46:23.309624   28683 status.go:255] checking status of ha-766221-m04 ...
	I0812 10:46:23.310096   28683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 10:46:23.310151   28683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 10:46:23.326215   28683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39133
	I0812 10:46:23.326847   28683 main.go:141] libmachine: () Calling .GetVersion
	I0812 10:46:23.327460   28683 main.go:141] libmachine: Using API Version  1
	I0812 10:46:23.327483   28683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 10:46:23.327871   28683 main.go:141] libmachine: () Calling .GetMachineName
	I0812 10:46:23.328104   28683 main.go:141] libmachine: (ha-766221-m04) Calling .GetState
	I0812 10:46:23.330085   28683 status.go:330] ha-766221-m04 host status = "Stopped" (err=<nil>)
	I0812 10:46:23.330102   28683 status.go:343] host is not running, skipping remaining checks
	I0812 10:46:23.330111   28683 status.go:257] ha-766221-m04 status: &{Name:ha-766221-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (39.10s)
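
Note that the status check above exits 7 rather than 0: `minikube status` signals cluster state through its exit code, so once every node reports Stopped the command still prints the table but returns non-zero, and the test asserts on the code rather than parsing stdout. A short sketch of handling that from Go, assuming a `minikube` binary on PATH and the profile name from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// A stopped node yields a non-zero exit code (7 in the log above)
	// even though the command itself ran fine; exec.ExitError lets a
	// caller tell that apart from a failure to run minikube at all.
	cmd := exec.Command("minikube", "-p", "ha-766221", "status")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &exitErr):
		fmt.Printf("status exit code %d:\n%s", exitErr.ExitCode(), out)
	default:
		fmt.Println("could not run minikube:", err)
	}
}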

TestMultiControlPlane/serial/RestartCluster (271.76s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-766221 --wait=true -v=7 --alsologtostderr --driver=kvm2 
E0812 10:46:39.639982   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
E0812 10:48:43.738426   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
E0812 10:50:16.594050   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-766221 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (4m30.978863114s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (271.76s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

TestMultiControlPlane/serial/AddSecondaryNode (91s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-766221 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-766221 --control-plane -v=7 --alsologtostderr: (1m30.112996062s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-766221 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (91.00s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

TestImageBuild/serial/Setup (52.75s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-698335 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-698335 --driver=kvm2 : (52.745536908s)
--- PASS: TestImageBuild/serial/Setup (52.75s)

TestImageBuild/serial/NormalBuild (2.09s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-698335
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-698335: (2.085115367s)
--- PASS: TestImageBuild/serial/NormalBuild (2.09s)

TestImageBuild/serial/BuildWithBuildArg (1.14s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-698335
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-698335: (1.141714615s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.14s)
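
The BuildWithBuildArg step exercises `--build-opt`, which forwards options to the underlying Docker build: `build-arg=ENV_A=test_env_str` reaches the Dockerfile as an ARG and `no-cache` disables layer reuse. A sketch of driving the same build from Go, with the tag, flags, and context directory taken verbatim from the test command above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirror the test's invocation: forward a build arg and disable the
	// cache via --build-opt, against the test-arg context directory.
	cmd := exec.Command("minikube", "-p", "image-698335", "image", "build",
		"-t", "aaa:latest",
		"--build-opt=build-arg=ENV_A=test_env_str",
		"--build-opt=no-cache",
		"./testdata/image-build/test-arg")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("build failed:", err)
	}
}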

TestImageBuild/serial/BuildWithDockerIgnore (0.84s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-698335
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.84s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.8s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-698335
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.80s)

TestJSONOutput/start/Command (70.44s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-988682 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0812 10:53:43.741320   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-988682 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m10.435853159s)
--- PASS: TestJSONOutput/start/Command (70.44s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.63s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-988682 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-988682 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.57s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-988682 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-988682 --output=json --user=testUser: (7.570334634s)
--- PASS: TestJSONOutput/stop/Command (7.57s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-349901 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-349901 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.422806ms)

-- stdout --
	{"specversion":"1.0","id":"68c98f56-4abc-4de2-a13d-343fd120b5d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-349901] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8e595eed-8742-41d2-9b80-b58560ea3780","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19409"}}
	{"specversion":"1.0","id":"7a10347f-8d11-4399-9dc2-d17a5cccbcd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1118b260-719b-4ad4-9de0-83c1ab29468b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19409-3796/kubeconfig"}}
	{"specversion":"1.0","id":"5f6a582a-5eac-4cf1-b938-7a3926d4fcc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3796/.minikube"}}
	{"specversion":"1.0","id":"99c37447-9f87-4b89-bd9b-1ada7ad3a952","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"98f39385-0c00-40f8-83f3-c98fb408ec8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"23bf0664-ec0e-4303-9676-d4af7862ac2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-349901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-349901
--- PASS: TestErrorJSONOutput (0.21s)
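
Every line in the stdout block above is a CloudEvents envelope (specversion, id, source, type, plus a type-specific data map), which is what `--output=json` emits so tooling can stream progress and errors. A sketch of consuming one such line, modelling only the fields visible in this log:

package main

import (
	"encoding/json"
	"fmt"
)

// minikubeEvent covers just the fields the JSON lines above actually show;
// the real event schema may carry more.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The error event from the log above, verbatim.
	line := `{"specversion":"1.0","id":"23bf0664-ec0e-4303-9676-d4af7862ac2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		fmt.Println("bad event:", err)
		return
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("%s (exit %s): %s\n",
			ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}
}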

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (115.4s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-207045 --driver=kvm2 
E0812 10:55:06.786684   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
E0812 10:55:16.594447   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-207045 --driver=kvm2 : (58.899853025s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-209658 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-209658 --driver=kvm2 : (53.562503025s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-207045
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-209658
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-209658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-209658
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-209658: (1.01162814s)
helpers_test.go:175: Cleaning up "first-207045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-207045
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-207045: (1.011913228s)
--- PASS: TestMinikubeProfile (115.40s)

TestMountStart/serial/StartWithMountFirst (31.92s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-853980 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-853980 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.920913414s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.92s)

TestMountStart/serial/VerifyMountFirst (0.39s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-853980 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-853980 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (32.78s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-867557 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-867557 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (31.781609615s)
--- PASS: TestMountStart/serial/StartWithMountSecond (32.78s)

TestMountStart/serial/VerifyMountSecond (0.38s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-867557 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-867557 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (1.17s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-853980 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-853980 --alsologtostderr -v=5: (1.172408087s)
--- PASS: TestMountStart/serial/DeleteFirst (1.17s)

TestMountStart/serial/VerifyMountPostDelete (0.4s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-867557 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-867557 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

TestMountStart/serial/Stop (2.29s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-867557
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-867557: (2.290674946s)
--- PASS: TestMountStart/serial/Stop (2.29s)

TestMountStart/serial/RestartStopped (27.07s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-867557
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-867557: (26.067207883s)
--- PASS: TestMountStart/serial/RestartStopped (27.07s)

TestMountStart/serial/VerifyMountPostStop (0.41s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-867557 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-867557 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)
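
Each VerifyMount* step above asserts the same two things over SSH: the host directory is listable inside the VM, and a 9p filesystem appears in the mount table. A small Go equivalent of the grep, assuming the minikube binary on PATH and the second profile's name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Run `mount` inside the VM the same way the test does, then scan
	// the output for a 9p entry instead of piping through grep.
	out, err := exec.Command("minikube", "-p", "mount-start-2-867557",
		"ssh", "--", "mount").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "9p") {
			fmt.Println("9p mount present:", line)
			return
		}
	}
	fmt.Println("no 9p mount found")
}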

TestMultiNode/serial/FreshStart2Nodes (140.1s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-709864 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0812 10:58:43.740510   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
E0812 11:00:16.594614   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-709864 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m19.666668372s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (140.10s)

TestMultiNode/serial/DeployApp2Nodes (4.45s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709864 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709864 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-709864 -- rollout status deployment/busybox: (2.814510741s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709864 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709864 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709864 -- exec busybox-fc5497c4f-2tzhd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709864 -- exec busybox-fc5497c4f-j9kx9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709864 -- exec busybox-fc5497c4f-2tzhd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709864 -- exec busybox-fc5497c4f-j9kx9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709864 -- exec busybox-fc5497c4f-2tzhd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709864 -- exec busybox-fc5497c4f-j9kx9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.45s)
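
The deploy-and-DNS check can be reproduced against any multi-node profile; a sketch assuming kubectl already points at the cluster's context (the suite instead routes every call through 'minikube kubectl -p <profile> --'). The manifest path mirrors the suite's testdata.

    # apply the test manifest and wait for both replicas
    kubectl apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    kubectl rollout status deployment/busybox
    # from every pod, resolve an external name and the in-cluster service name
    for pod in $(kubectl get pods -o jsonpath='{.items[*].metadata.name}'); do
      kubectl exec "$pod" -- nslookup kubernetes.io
      kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done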

TestMultiNode/serial/PingHostFrom2Pods (0.88s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709864 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709864 -- exec busybox-fc5497c4f-2tzhd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709864 -- exec busybox-fc5497c4f-2tzhd -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709864 -- exec busybox-fc5497c4f-j9kx9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-709864 -- exec busybox-fc5497c4f-j9kx9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)
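
The host-reachability check parses a pod's resolver output for host.minikube.internal and pings the resulting address; a sketch with an illustrative pod name (replica names vary per run), reusing the suite's awk/cut parsing as-is.

    # resolve the hypervisor host from inside a pod, then ping it once
    HOST_IP=$(kubectl exec busybox-fc5497c4f-2tzhd -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl exec busybox-fc5497c4f-2tzhd -- sh -c "ping -c 1 $HOST_IP"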

TestMultiNode/serial/AddNode (57.27s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-709864 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-709864 -v 3 --alsologtostderr: (56.675534901s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.27s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-709864 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.23s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

TestMultiNode/serial/CopyFile (7.68s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 cp testdata/cp-test.txt multinode-709864:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 ssh -n multinode-709864 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 cp multinode-709864:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile622836300/001/cp-test_multinode-709864.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 ssh -n multinode-709864 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 cp multinode-709864:/home/docker/cp-test.txt multinode-709864-m02:/home/docker/cp-test_multinode-709864_multinode-709864-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 ssh -n multinode-709864 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 ssh -n multinode-709864-m02 "sudo cat /home/docker/cp-test_multinode-709864_multinode-709864-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 cp multinode-709864:/home/docker/cp-test.txt multinode-709864-m03:/home/docker/cp-test_multinode-709864_multinode-709864-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 ssh -n multinode-709864 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 ssh -n multinode-709864-m03 "sudo cat /home/docker/cp-test_multinode-709864_multinode-709864-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 cp testdata/cp-test.txt multinode-709864-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 ssh -n multinode-709864-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 cp multinode-709864-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile622836300/001/cp-test_multinode-709864-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 ssh -n multinode-709864-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 cp multinode-709864-m02:/home/docker/cp-test.txt multinode-709864:/home/docker/cp-test_multinode-709864-m02_multinode-709864.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 ssh -n multinode-709864-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 ssh -n multinode-709864 "sudo cat /home/docker/cp-test_multinode-709864-m02_multinode-709864.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 cp multinode-709864-m02:/home/docker/cp-test.txt multinode-709864-m03:/home/docker/cp-test_multinode-709864-m02_multinode-709864-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 ssh -n multinode-709864-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 ssh -n multinode-709864-m03 "sudo cat /home/docker/cp-test_multinode-709864-m02_multinode-709864-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 cp testdata/cp-test.txt multinode-709864-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 ssh -n multinode-709864-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 cp multinode-709864-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile622836300/001/cp-test_multinode-709864-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 ssh -n multinode-709864-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 cp multinode-709864-m03:/home/docker/cp-test.txt multinode-709864:/home/docker/cp-test_multinode-709864-m03_multinode-709864.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 ssh -n multinode-709864-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 ssh -n multinode-709864 "sudo cat /home/docker/cp-test_multinode-709864-m03_multinode-709864.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 cp multinode-709864-m03:/home/docker/cp-test.txt multinode-709864-m02:/home/docker/cp-test_multinode-709864-m03_multinode-709864-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 ssh -n multinode-709864-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 ssh -n multinode-709864-m02 "sudo cat /home/docker/cp-test_multinode-709864-m03_multinode-709864-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.68s)
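
All three copy directions exercised above go through 'minikube cp'; a condensed sketch using the illustrative profile 'demo' and its second node 'demo-m02' (additional nodes are always named <profile>-m02, -m03, ...).

    # host -> node, node -> host, and node -> node
    minikube -p demo cp testdata/cp-test.txt demo:/home/docker/cp-test.txt
    minikube -p demo cp demo:/home/docker/cp-test.txt /tmp/cp-test.txt
    minikube -p demo cp demo:/home/docker/cp-test.txt demo-m02:/home/docker/cp-test.txt
    # verify a copy on a specific node with 'ssh -n'
    minikube -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"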

TestMultiNode/serial/StopNode (3.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-709864 node stop m03: (2.535088205s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-709864 status: exit status 7 (444.431831ms)
-- stdout --
	multinode-709864
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-709864-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-709864-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-709864 status --alsologtostderr: exit status 7 (443.20762ms)
-- stdout --
	multinode-709864
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-709864-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-709864-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0812 11:01:56.150290   37748 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:01:56.150422   37748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:01:56.150432   37748 out.go:304] Setting ErrFile to fd 2...
	I0812 11:01:56.150436   37748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:01:56.150668   37748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3796/.minikube/bin
	I0812 11:01:56.150851   37748 out.go:298] Setting JSON to false
	I0812 11:01:56.150876   37748 mustload.go:65] Loading cluster: multinode-709864
	I0812 11:01:56.150966   37748 notify.go:220] Checking for updates...
	I0812 11:01:56.151333   37748 config.go:182] Loaded profile config "multinode-709864": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 11:01:56.151350   37748 status.go:255] checking status of multinode-709864 ...
	I0812 11:01:56.151800   37748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 11:01:56.151881   37748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:01:56.168199   37748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42649
	I0812 11:01:56.168578   37748 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:01:56.169169   37748 main.go:141] libmachine: Using API Version  1
	I0812 11:01:56.169188   37748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:01:56.169559   37748 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:01:56.169828   37748 main.go:141] libmachine: (multinode-709864) Calling .GetState
	I0812 11:01:56.171515   37748 status.go:330] multinode-709864 host status = "Running" (err=<nil>)
	I0812 11:01:56.171536   37748 host.go:66] Checking if "multinode-709864" exists ...
	I0812 11:01:56.171856   37748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 11:01:56.171906   37748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:01:56.187270   37748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42685
	I0812 11:01:56.187651   37748 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:01:56.188137   37748 main.go:141] libmachine: Using API Version  1
	I0812 11:01:56.188164   37748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:01:56.188490   37748 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:01:56.188666   37748 main.go:141] libmachine: (multinode-709864) Calling .GetIP
	I0812 11:01:56.191326   37748 main.go:141] libmachine: (multinode-709864) DBG | domain multinode-709864 has defined MAC address 52:54:00:75:cc:ef in network mk-multinode-709864
	I0812 11:01:56.191764   37748 main.go:141] libmachine: (multinode-709864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cc:ef", ip: ""} in network mk-multinode-709864: {Iface:virbr1 ExpiryTime:2024-08-12 11:58:37 +0000 UTC Type:0 Mac:52:54:00:75:cc:ef Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-709864 Clientid:01:52:54:00:75:cc:ef}
	I0812 11:01:56.191800   37748 main.go:141] libmachine: (multinode-709864) DBG | domain multinode-709864 has defined IP address 192.168.39.48 and MAC address 52:54:00:75:cc:ef in network mk-multinode-709864
	I0812 11:01:56.191910   37748 host.go:66] Checking if "multinode-709864" exists ...
	I0812 11:01:56.192245   37748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 11:01:56.192297   37748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:01:56.207537   37748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41579
	I0812 11:01:56.207918   37748 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:01:56.208432   37748 main.go:141] libmachine: Using API Version  1
	I0812 11:01:56.208458   37748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:01:56.208799   37748 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:01:56.209012   37748 main.go:141] libmachine: (multinode-709864) Calling .DriverName
	I0812 11:01:56.209186   37748 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 11:01:56.209214   37748 main.go:141] libmachine: (multinode-709864) Calling .GetSSHHostname
	I0812 11:01:56.211801   37748 main.go:141] libmachine: (multinode-709864) DBG | domain multinode-709864 has defined MAC address 52:54:00:75:cc:ef in network mk-multinode-709864
	I0812 11:01:56.212160   37748 main.go:141] libmachine: (multinode-709864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cc:ef", ip: ""} in network mk-multinode-709864: {Iface:virbr1 ExpiryTime:2024-08-12 11:58:37 +0000 UTC Type:0 Mac:52:54:00:75:cc:ef Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-709864 Clientid:01:52:54:00:75:cc:ef}
	I0812 11:01:56.212192   37748 main.go:141] libmachine: (multinode-709864) DBG | domain multinode-709864 has defined IP address 192.168.39.48 and MAC address 52:54:00:75:cc:ef in network mk-multinode-709864
	I0812 11:01:56.212340   37748 main.go:141] libmachine: (multinode-709864) Calling .GetSSHPort
	I0812 11:01:56.212550   37748 main.go:141] libmachine: (multinode-709864) Calling .GetSSHKeyPath
	I0812 11:01:56.212711   37748 main.go:141] libmachine: (multinode-709864) Calling .GetSSHUsername
	I0812 11:01:56.212815   37748 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/multinode-709864/id_rsa Username:docker}
	I0812 11:01:56.297656   37748 ssh_runner.go:195] Run: systemctl --version
	I0812 11:01:56.304238   37748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:01:56.323004   37748 kubeconfig.go:125] found "multinode-709864" server: "https://192.168.39.48:8443"
	I0812 11:01:56.323036   37748 api_server.go:166] Checking apiserver status ...
	I0812 11:01:56.323067   37748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:01:56.339909   37748 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1951/cgroup
	W0812 11:01:56.351638   37748 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1951/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 11:01:56.351698   37748 ssh_runner.go:195] Run: ls
	I0812 11:01:56.356435   37748 api_server.go:253] Checking apiserver healthz at https://192.168.39.48:8443/healthz ...
	I0812 11:01:56.361630   37748 api_server.go:279] https://192.168.39.48:8443/healthz returned 200:
	ok
	I0812 11:01:56.361652   37748 status.go:422] multinode-709864 apiserver status = Running (err=<nil>)
	I0812 11:01:56.361675   37748 status.go:257] multinode-709864 status: &{Name:multinode-709864 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 11:01:56.361702   37748 status.go:255] checking status of multinode-709864-m02 ...
	I0812 11:01:56.362108   37748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 11:01:56.362152   37748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:01:56.378097   37748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45133
	I0812 11:01:56.378502   37748 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:01:56.378957   37748 main.go:141] libmachine: Using API Version  1
	I0812 11:01:56.378975   37748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:01:56.379320   37748 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:01:56.379537   37748 main.go:141] libmachine: (multinode-709864-m02) Calling .GetState
	I0812 11:01:56.381032   37748 status.go:330] multinode-709864-m02 host status = "Running" (err=<nil>)
	I0812 11:01:56.381047   37748 host.go:66] Checking if "multinode-709864-m02" exists ...
	I0812 11:01:56.381330   37748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 11:01:56.381394   37748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:01:56.396949   37748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33297
	I0812 11:01:56.397361   37748 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:01:56.397907   37748 main.go:141] libmachine: Using API Version  1
	I0812 11:01:56.397928   37748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:01:56.398271   37748 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:01:56.398463   37748 main.go:141] libmachine: (multinode-709864-m02) Calling .GetIP
	I0812 11:01:56.401342   37748 main.go:141] libmachine: (multinode-709864-m02) DBG | domain multinode-709864-m02 has defined MAC address 52:54:00:1a:98:5c in network mk-multinode-709864
	I0812 11:01:56.401796   37748 main.go:141] libmachine: (multinode-709864-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:98:5c", ip: ""} in network mk-multinode-709864: {Iface:virbr1 ExpiryTime:2024-08-12 12:00:00 +0000 UTC Type:0 Mac:52:54:00:1a:98:5c Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-709864-m02 Clientid:01:52:54:00:1a:98:5c}
	I0812 11:01:56.401822   37748 main.go:141] libmachine: (multinode-709864-m02) DBG | domain multinode-709864-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:1a:98:5c in network mk-multinode-709864
	I0812 11:01:56.401981   37748 host.go:66] Checking if "multinode-709864-m02" exists ...
	I0812 11:01:56.402380   37748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 11:01:56.402437   37748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:01:56.417655   37748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0812 11:01:56.418065   37748 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:01:56.418559   37748 main.go:141] libmachine: Using API Version  1
	I0812 11:01:56.418579   37748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:01:56.418905   37748 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:01:56.419083   37748 main.go:141] libmachine: (multinode-709864-m02) Calling .DriverName
	I0812 11:01:56.419280   37748 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 11:01:56.419304   37748 main.go:141] libmachine: (multinode-709864-m02) Calling .GetSSHHostname
	I0812 11:01:56.422483   37748 main.go:141] libmachine: (multinode-709864-m02) DBG | domain multinode-709864-m02 has defined MAC address 52:54:00:1a:98:5c in network mk-multinode-709864
	I0812 11:01:56.422946   37748 main.go:141] libmachine: (multinode-709864-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:98:5c", ip: ""} in network mk-multinode-709864: {Iface:virbr1 ExpiryTime:2024-08-12 12:00:00 +0000 UTC Type:0 Mac:52:54:00:1a:98:5c Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-709864-m02 Clientid:01:52:54:00:1a:98:5c}
	I0812 11:01:56.422981   37748 main.go:141] libmachine: (multinode-709864-m02) DBG | domain multinode-709864-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:1a:98:5c in network mk-multinode-709864
	I0812 11:01:56.423146   37748 main.go:141] libmachine: (multinode-709864-m02) Calling .GetSSHPort
	I0812 11:01:56.423333   37748 main.go:141] libmachine: (multinode-709864-m02) Calling .GetSSHKeyPath
	I0812 11:01:56.423527   37748 main.go:141] libmachine: (multinode-709864-m02) Calling .GetSSHUsername
	I0812 11:01:56.423747   37748 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/multinode-709864-m02/id_rsa Username:docker}
	I0812 11:01:56.510012   37748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 11:01:56.526728   37748 status.go:257] multinode-709864-m02 status: &{Name:multinode-709864-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0812 11:01:56.526775   37748 status.go:255] checking status of multinode-709864-m03 ...
	I0812 11:01:56.527139   37748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 11:01:56.527185   37748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:01:56.544140   37748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34635
	I0812 11:01:56.544682   37748 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:01:56.545238   37748 main.go:141] libmachine: Using API Version  1
	I0812 11:01:56.545267   37748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:01:56.545685   37748 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:01:56.545906   37748 main.go:141] libmachine: (multinode-709864-m03) Calling .GetState
	I0812 11:01:56.547534   37748 status.go:330] multinode-709864-m03 host status = "Stopped" (err=<nil>)
	I0812 11:01:56.547549   37748 status.go:343] host is not running, skipping remaining checks
	I0812 11:01:56.547555   37748 status.go:257] multinode-709864-m03 status: &{Name:multinode-709864-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.42s)
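
Stopping a single node leaves the rest of the cluster running; a sketch of the sequence, noting that 'status' exits 7 once any host reports Stopped, as seen in the output above.

    # stop only the third node; the control plane and other workers keep running
    minikube -p demo node stop m03
    # status now exits non-zero because one host is Stopped
    minikube -p demo status || echo "exit=$?"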

TestMultiNode/serial/StartAfterStop (43.69s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-709864 node start m03 -v=7 --alsologtostderr: (43.006766971s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (43.69s)

TestMultiNode/serial/RestartKeepsNodes (179.9s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-709864
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-709864
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-709864: (27.536975953s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-709864 --wait=true -v=8 --alsologtostderr
E0812 11:03:19.640382   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
E0812 11:03:43.738776   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
E0812 11:05:16.594005   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-709864 --wait=true -v=8 --alsologtostderr: (2m32.26897842s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-709864
--- PASS: TestMultiNode/serial/RestartKeepsNodes (179.90s)

TestMultiNode/serial/DeleteNode (2.55s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-709864 node delete m03: (1.991501366s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.55s)
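
Removing a node is the inverse operation; a minimal sketch.

    # drop the third node entirely, then confirm the remaining nodes are Ready
    minikube -p demo node delete m03
    kubectl get nodes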

TestMultiNode/serial/StopMultiNode (25.12s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-709864 stop: (24.94633557s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-709864 status: exit status 7 (90.742287ms)
-- stdout --
	multinode-709864
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-709864-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-709864 status --alsologtostderr: exit status 7 (86.651605ms)
-- stdout --
	multinode-709864
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-709864-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0812 11:06:07.772623   39508 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:06:07.772898   39508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:06:07.772908   39508 out.go:304] Setting ErrFile to fd 2...
	I0812 11:06:07.772913   39508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:06:07.773146   39508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3796/.minikube/bin
	I0812 11:06:07.773321   39508 out.go:298] Setting JSON to false
	I0812 11:06:07.773345   39508 mustload.go:65] Loading cluster: multinode-709864
	I0812 11:06:07.773398   39508 notify.go:220] Checking for updates...
	I0812 11:06:07.773749   39508 config.go:182] Loaded profile config "multinode-709864": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0812 11:06:07.773765   39508 status.go:255] checking status of multinode-709864 ...
	I0812 11:06:07.774222   39508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 11:06:07.774286   39508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:06:07.790708   39508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46703
	I0812 11:06:07.791146   39508 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:06:07.791763   39508 main.go:141] libmachine: Using API Version  1
	I0812 11:06:07.791787   39508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:06:07.792150   39508 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:06:07.792384   39508 main.go:141] libmachine: (multinode-709864) Calling .GetState
	I0812 11:06:07.794278   39508 status.go:330] multinode-709864 host status = "Stopped" (err=<nil>)
	I0812 11:06:07.794294   39508 status.go:343] host is not running, skipping remaining checks
	I0812 11:06:07.794302   39508 status.go:257] multinode-709864 status: &{Name:multinode-709864 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 11:06:07.794326   39508 status.go:255] checking status of multinode-709864-m02 ...
	I0812 11:06:07.794650   39508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0812 11:06:07.794698   39508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:06:07.810824   39508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35989
	I0812 11:06:07.811268   39508 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:06:07.811743   39508 main.go:141] libmachine: Using API Version  1
	I0812 11:06:07.811775   39508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:06:07.812167   39508 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:06:07.812388   39508 main.go:141] libmachine: (multinode-709864-m02) Calling .GetState
	I0812 11:06:07.814076   39508 status.go:330] multinode-709864-m02 host status = "Stopped" (err=<nil>)
	I0812 11:06:07.814090   39508 status.go:343] host is not running, skipping remaining checks
	I0812 11:06:07.814095   39508 status.go:257] multinode-709864-m02 status: &{Name:multinode-709864-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.12s)

TestMultiNode/serial/RestartMultiNode (125.44s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-709864 --wait=true -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-709864 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (2m4.884351378s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-709864 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (125.44s)

TestMultiNode/serial/ValidateNameConflict (54.56s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-709864
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-709864-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-709864-m02 --driver=kvm2 : exit status 14 (67.181372ms)
-- stdout --
	* [multinode-709864-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19409-3796/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3796/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-709864-m02' is duplicated with machine name 'multinode-709864-m02' in profile 'multinode-709864'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-709864-m03 --driver=kvm2 
E0812 11:08:43.738209   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-709864-m03 --driver=kvm2 : (53.225508558s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-709864
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-709864: exit status 80 (224.719668ms)
-- stdout --
	* Adding node m03 to cluster multinode-709864 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-709864-m03 already exists in multinode-709864-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-709864-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (54.56s)

TestPreload (202.63s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-029974 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0812 11:10:16.593979   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-029974 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (2m9.990681113s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-029974 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-029974 image pull gcr.io/k8s-minikube/busybox: (1.331624494s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-029974
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-029974: (12.569360289s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-029974 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0812 11:11:46.786925   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-029974 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (57.43139371s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-029974 image list
helpers_test.go:175: Cleaning up "test-preload-029974" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-029974
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-029974: (1.101740789s)
--- PASS: TestPreload (202.63s)
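
The preload flow above, condensed; a sketch with an illustrative profile name, using only the flags and versions exercised by this test.

    # start without a preload tarball on an older Kubernetes
    minikube start -p preload-demo --memory=2200 --preload=false --driver=kvm2 --kubernetes-version=v1.24.4
    # side-load an image, stop, restart, and confirm the image survived
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --memory=2200 --wait=true --driver=kvm2
    minikube -p preload-demo image list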

TestScheduledStopUnix (123.02s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-435059 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-435059 --memory=2048 --driver=kvm2 : (51.2837868s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-435059 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-435059 -n scheduled-stop-435059
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-435059 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-435059 --cancel-scheduled
E0812 11:13:43.738730   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-435059 -n scheduled-stop-435059
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-435059
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-435059 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-435059
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-435059: exit status 7 (76.60972ms)
-- stdout --
	scheduled-stop-435059
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-435059 -n scheduled-stop-435059
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-435059 -n scheduled-stop-435059: exit status 7 (66.898302ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-435059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-435059
--- PASS: TestScheduledStopUnix (123.02s)
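
Scheduled stops can be set, replaced, or cancelled; a sketch of the knobs this test exercises, with 'demo' as an illustrative profile name.

    # schedule a stop, then replace the pending schedule with a shorter one
    minikube stop -p demo --schedule 5m
    minikube stop -p demo --schedule 15s
    # or cancel a pending scheduled stop outright
    minikube stop -p demo --cancel-scheduled
    # once a schedule fires, status reports Stopped and exits 7
    minikube status -p demo --format={{.Host}}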

TestSkaffold (140.05s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe34470121 version
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-628488 --memory=2600 --driver=kvm2 
E0812 11:15:16.594355   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-628488 --memory=2600 --driver=kvm2 : (51.234308464s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe34470121 run --minikube-profile skaffold-628488 --kube-context skaffold-628488 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe34470121 run --minikube-profile skaffold-628488 --kube-context skaffold-628488 --status-check=true --port-forward=false --interactive=false: (1m15.672997105s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5458b4bb44-459r4" [da2b5daf-55b0-42c4-a8b0-83ecd0dfd919] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004270747s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6b4b79f449-kjpdf" [ffc9f9fa-cc8b-46b6-a545-2d4de6fc25a7] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004848505s
helpers_test.go:175: Cleaning up "skaffold-628488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-628488
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-628488: (1.252490788s)
--- PASS: TestSkaffold (140.05s)
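
The skaffold flow needs only a dedicated profile and a matching kube-context; a sketch mirroring the flags used above (the suite also copies the minikube binary onto PATH first, presumably so skaffold can locate it).

    # start a profile for skaffold, then deploy against it
    minikube start -p skaffold-demo --memory=2600 --driver=kvm2
    skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo \
      --status-check=true --port-forward=false --interactive=false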

TestRunningBinaryUpgrade (229.14s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.436142404 start -p running-upgrade-453798 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.436142404 start -p running-upgrade-453798 --memory=2200 --vm-driver=kvm2 : (2m24.201918071s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-453798 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-453798 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m22.872983844s)
helpers_test.go:175: Cleaning up "running-upgrade-453798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-453798
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-453798: (1.360834781s)
--- PASS: TestRunningBinaryUpgrade (229.14s)

TestKubernetesUpgrade (220.44s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-145284 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-145284 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (1m28.46093619s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-145284
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-145284: (12.868145515s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-145284 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-145284 status --format={{.Host}}: exit status 7 (78.914147ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-145284 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2 
E0812 11:18:43.738304   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-145284 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2 : (55.857161548s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-145284 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-145284 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-145284 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (85.352581ms)
-- stdout --
	* [kubernetes-upgrade-145284] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19409-3796/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3796/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-145284
	    minikube start -p kubernetes-upgrade-145284 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1452842 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-145284 --kubernetes-version=v1.31.0-rc.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-145284 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2 
E0812 11:19:59.641542   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
E0812 11:20:16.594129   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-145284 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2 : (1m1.732890685s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-145284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-145284
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-145284: (1.290774815s)
--- PASS: TestKubernetesUpgrade (220.44s)
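
The upgrade/downgrade sequence, condensed; a sketch with an illustrative profile name, using the versions exercised above.

    # upgrade path: start old, stop, restart with the newer version
    minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2
    minikube stop -p upgrade-demo
    minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.31.0-rc.0 --driver=kvm2
    # an in-place downgrade is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED)
    minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 || echo "exit=$?"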

TestStoppedBinaryUpgrade/Setup (0.61s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.61s)

TestStoppedBinaryUpgrade/Upgrade (219.41s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2808231357 start -p stopped-upgrade-306905 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2808231357 start -p stopped-upgrade-306905 --memory=2200 --vm-driver=kvm2 : (2m5.640641599s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2808231357 -p stopped-upgrade-306905 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2808231357 -p stopped-upgrade-306905 stop: (12.551559241s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-306905 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-306905 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m21.217717492s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (219.41s)

TestPause/serial/Start (134.46s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-342687 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-342687 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (2m14.460930487s)
--- PASS: TestPause/serial/Start (134.46s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.58s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-306905
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-306905: (1.580240707s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.58s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-243322 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-243322 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (86.459491ms)
-- stdout --
	* [NoKubernetes-243322] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19409-3796/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3796/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
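
The flag conflict is caught before any VM work begins; a sketch, including the valid no-Kubernetes form, with an illustrative profile name.

    # combining --no-kubernetes with an explicit version is rejected (exit 14, MK_USAGE)
    minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 || echo "exit=$?"
    # without the version flag the machine starts, just with no Kubernetes on it
    minikube start -p nok8s-demo --no-kubernetes --driver=kvm2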

TestNoKubernetes/serial/StartWithK8s (54.37s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-243322 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-243322 --driver=kvm2 : (54.100507572s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-243322 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (54.37s)

TestPause/serial/SecondStartNoReconfiguration (111.15s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-342687 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-342687 --alsologtostderr -v=1 --driver=kvm2 : (1m51.119819205s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (111.15s)

TestNoKubernetes/serial/StartWithStopK8s (50.92s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-243322 --no-kubernetes --driver=kvm2 
E0812 11:21:43.040974   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
E0812 11:21:43.046300   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
E0812 11:21:43.056644   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
E0812 11:21:43.076970   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
E0812 11:21:43.117257   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
E0812 11:21:43.197725   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
E0812 11:21:43.358198   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
E0812 11:21:43.678830   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
E0812 11:21:44.319973   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
E0812 11:21:45.601056   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
E0812 11:21:48.161343   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
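Note: the E0812 cert_rotation lines above appear to come from a client-go certificate watcher that still references the skaffold-628488 profile deleted by an earlier test; they are noise from the shared test process and do not affect this test's verdict.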
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-243322 --no-kubernetes --driver=kvm2 : (49.404020651s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-243322 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-243322 status -o json: exit status 2 (289.353517ms)
-- stdout --
	{"Name":"NoKubernetes-243322","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
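Note: exit status 2 is expected here; the JSON shows the host VM running with the kubelet and apiserver stopped, which is the state --no-kubernetes should leave behind, and `minikube status` exits non-zero whenever components are not running. A rough equivalent check (hypothetical profile name):

	minikube -p demo status -o json
	echo $?   # non-zero while Kubernetes components are stopped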
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-243322
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-243322: (1.227995719s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (50.92s)

TestNoKubernetes/serial/Start (32.64s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-243322 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-243322 --no-kubernetes --driver=kvm2 : (32.644719951s)
--- PASS: TestNoKubernetes/serial/Start (32.64s)

TestPause/serial/Pause (1.17s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-342687 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-342687 --alsologtostderr -v=5: (1.167554838s)
--- PASS: TestPause/serial/Pause (1.17s)

TestPause/serial/VerifyStatus (0.28s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-342687 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-342687 --output=json --layout=cluster: exit status 2 (277.767069ms)
-- stdout --
	{"Name":"pause-342687","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-342687","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
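Note: in --layout=cluster output minikube reports component state with HTTP-like status codes, all visible above: 200/OK, 405/Stopped, 418/Paused. A paused cluster therefore yields a non-zero exit even though the pause itself succeeded. Sketch (hypothetical profile name):

	minikube status -p demo --output=json --layout=cluster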
--- PASS: TestPause/serial/VerifyStatus (0.28s)

TestPause/serial/Unpause (0.63s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-342687 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-243322 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-243322 "sudo systemctl is-active --quiet service kubelet": exit status 1 (236.274289ms)
** stderr **
	ssh: Process exited with status 3
** /stderr **
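Note: `systemctl is-active` exits 0 only when the unit is active, so the non-zero exit (systemd conventionally reports 3 for an inactive unit) is exactly what the test wants after a Kubernetes-free start. Equivalent manual check (hypothetical profile name):

	minikube ssh -p demo "sudo systemctl is-active --quiet service kubelet"
	echo $?   # expected non-zero while the kubelet is not running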
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

TestNoKubernetes/serial/ProfileList (1.12s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.12s)

TestPause/serial/PauseAgain (0.82s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-342687 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.82s)

TestNoKubernetes/serial/Stop (2.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-243322
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-243322: (2.306987303s)
--- PASS: TestNoKubernetes/serial/Stop (2.31s)

TestPause/serial/DeletePaused (1.09s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-342687 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-342687 --alsologtostderr -v=5: (1.092631001s)
--- PASS: TestPause/serial/DeletePaused (1.09s)

TestPause/serial/VerifyDeletedResources (4.58s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.581107955s)
--- PASS: TestPause/serial/VerifyDeletedResources (4.58s)

TestNoKubernetes/serial/StartNoArgs (66.74s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-243322 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-243322 --driver=kvm2 : (1m6.739709392s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (66.74s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-243322 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-243322 "sudo systemctl is-active --quiet service kubelet": exit status 1 (215.851711ms)
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

TestStartStop/group/old-k8s-version/serial/FirstStart (239.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-679119 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
E0812 11:24:26.885434   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-679119 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (3m59.857387192s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (239.86s)

TestStartStop/group/no-preload/serial/FirstStart (136.77s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-208795 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-rc.0
E0812 11:25:16.593943   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-208795 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-rc.0: (2m16.769248576s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (136.77s)

TestStartStop/group/embed-certs/serial/FirstStart (132.77s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-387635 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.3
E0812 11:26:43.040882   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
E0812 11:27:10.726191   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-387635 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.3: (2m12.773848053s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (132.77s)

TestStartStop/group/no-preload/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-208795 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ca352ff1-15e5-48ef-a504-ea3084b0ce58] Pending
helpers_test.go:344: "busybox" [ca352ff1-15e5-48ef-a504-ea3084b0ce58] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ca352ff1-15e5-48ef-a504-ea3084b0ce58] Running
E0812 11:27:22.551567   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/gvisor-135153/client.crt: no such file or directory
E0812 11:27:22.556885   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/gvisor-135153/client.crt: no such file or directory
E0812 11:27:22.567249   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/gvisor-135153/client.crt: no such file or directory
E0812 11:27:22.587602   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/gvisor-135153/client.crt: no such file or directory
E0812 11:27:22.627927   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/gvisor-135153/client.crt: no such file or directory
E0812 11:27:22.708959   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/gvisor-135153/client.crt: no such file or directory
E0812 11:27:22.869440   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/gvisor-135153/client.crt: no such file or directory
E0812 11:27:23.190083   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/gvisor-135153/client.crt: no such file or directory
E0812 11:27:23.831311   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/gvisor-135153/client.crt: no such file or directory
E0812 11:27:25.112313   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/gvisor-135153/client.crt: no such file or directory
E0812 11:27:27.672615   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/gvisor-135153/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005035081s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-208795 exec busybox -- /bin/sh -c "ulimit -n"
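Note: the `ulimit -n` exec is an end-to-end liveness probe more than a limits check; it confirms the API server can reach the running busybox container and execute a command in it. Roughly (hypothetical context name):

	kubectl --context demo exec busybox -- /bin/sh -c "ulimit -n"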
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.40s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-208795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-208795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.018830971s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-208795 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/no-preload/serial/Stop (13.38s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-208795 --alsologtostderr -v=3
E0812 11:27:32.793559   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/gvisor-135153/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-208795 --alsologtostderr -v=3: (13.379736239s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.38s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-208795 -n no-preload-208795
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-208795 -n no-preload-208795: exit status 7 (67.904525ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
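Note: minikube's status help describes the exit code as a bit field (1 host, 2 kubelet, 4 apiserver), so status 7 reads as all three down, consistent with the `minikube stop` just before; the harness accordingly marks it "may be ok". Sketch (hypothetical profile name):

	minikube status --format={{.Host}} -p demo; echo $?   # 7 after a full stop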
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-208795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0812 11:27:43.033991   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/gvisor-135153/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (313.89s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-208795 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-rc.0
E0812 11:28:03.514455   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/gvisor-135153/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-208795 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-rc.0: (5m13.616257002s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-208795 -n no-preload-208795
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (313.89s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-679119 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6f3e4df9-faeb-4c43-8c09-3e25901f043e] Pending
helpers_test.go:344: "busybox" [6f3e4df9-faeb-4c43-8c09-3e25901f043e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6f3e4df9-faeb-4c43-8c09-3e25901f043e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.005177111s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-679119 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.54s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-679119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-679119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.16443087s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-679119 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.34s)

TestStartStop/group/embed-certs/serial/DeployApp (9.45s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-387635 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [909c228b-fed0-4774-9032-bba9d5a6a2d6] Pending
helpers_test.go:344: "busybox" [909c228b-fed0-4774-9032-bba9d5a6a2d6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [909c228b-fed0-4774-9032-bba9d5a6a2d6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005814289s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-387635 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.45s)

TestStartStop/group/old-k8s-version/serial/Stop (13.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-679119 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-679119 --alsologtostderr -v=3: (13.391556151s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.39s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-387635 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-387635 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/embed-certs/serial/Stop (13.35s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-387635 --alsologtostderr -v=3
E0812 11:28:26.788118   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-387635 --alsologtostderr -v=3: (13.346287103s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.35s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-679119 -n old-k8s-version-679119
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-679119 -n old-k8s-version-679119: exit status 7 (72.65093ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-679119 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (525.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-679119 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-679119 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (8m45.000042981s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-679119 -n old-k8s-version-679119
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (525.31s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-387635 -n embed-certs-387635
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-387635 -n embed-certs-387635: exit status 7 (71.396224ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-387635 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (337.52s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-387635 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.3
E0812 11:28:43.738081   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
E0812 11:28:44.475152   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/gvisor-135153/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-387635 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.3: (5m37.134413963s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-387635 -n embed-certs-387635
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (337.52s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (110.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-967148 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.3
E0812 11:30:06.396372   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/gvisor-135153/client.crt: no such file or directory
E0812 11:30:16.594281   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-967148 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.3: (1m50.889495438s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (110.89s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-967148 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8316dc31-b0e0-41e9-bb6a-9c497c49c094] Pending
helpers_test.go:344: "busybox" [8316dc31-b0e0-41e9-bb6a-9c497c49c094] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8316dc31-b0e0-41e9-bb6a-9c497c49c094] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.006521523s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-967148 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-967148 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-967148 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.042940819s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-967148 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-967148 --alsologtostderr -v=3
E0812 11:31:43.040619   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-967148 --alsologtostderr -v=3: (13.352257449s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-967148 -n default-k8s-diff-port-967148
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-967148 -n default-k8s-diff-port-967148: exit status 7 (76.148308ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-967148 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (320.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-967148 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.3
E0812 11:32:22.551535   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/gvisor-135153/client.crt: no such file or directory
E0812 11:32:50.237506   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/gvisor-135153/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-967148 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.3: (5m19.95950492s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-967148 -n default-k8s-diff-port-967148
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (320.26s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5tz58" [b5807195-732e-49d7-938d-98729a7bcd26] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5tz58" [b5807195-732e-49d7-938d-98729a7bcd26] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005882633s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5tz58" [b5807195-732e-49d7-938d-98729a7bcd26] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005855642s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-208795 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-208795 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.74s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-208795 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-208795 -n no-preload-208795
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-208795 -n no-preload-208795: exit status 2 (270.975226ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-208795 -n no-preload-208795
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-208795 -n no-preload-208795: exit status 2 (260.46581ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-208795 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-208795 -n no-preload-208795
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-208795 -n no-preload-208795
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.74s)

TestStartStop/group/newest-cni/serial/FirstStart (72.04s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-622555 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0-rc.0
E0812 11:33:43.738594   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-622555 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0-rc.0: (1m12.036142521s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (72.04s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-47mpb" [0527e3b9-4052-4885-b9dc-097461ec167d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-47mpb" [0527e3b9-4052-4885-b9dc-097461ec167d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.006761168s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-622555 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
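Note: this warning explains the zero-second DeployApp and *ExistsAfterStop entries in the newest-cni group; with --network-plugin=cni and no CNI manifest applied, workload pods cannot schedule, so the suite skips its pod-dependent assertions.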
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/newest-cni/serial/Stop (8.36s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-622555 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-622555 --alsologtostderr -v=3: (8.35607791s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.36s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-47mpb" [0527e3b9-4052-4885-b9dc-097461ec167d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006253956s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-387635 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-622555 -n newest-cni-622555
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-622555 -n newest-cni-622555: exit status 7 (77.758947ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-622555 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (44.12s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-622555 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-622555 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0-rc.0: (43.821236513s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-622555 -n newest-cni-622555
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (44.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-387635 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (2.9s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-387635 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-387635 -n embed-certs-387635
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-387635 -n embed-certs-387635: exit status 2 (280.892175ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-387635 -n embed-certs-387635
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-387635 -n embed-certs-387635: exit status 2 (269.955297ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-387635 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-387635 -n embed-certs-387635
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-387635 -n embed-certs-387635
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.90s)

TestNetworkPlugins/group/auto/Start (134.19s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-637626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E0812 11:35:16.594520   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-637626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (2m14.191171767s)
--- PASS: TestNetworkPlugins/group/auto/Start (134.19s)
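
Note: the TestNetworkPlugins groups in this run all share the same start invocation (out/minikube-linux-amd64 start -p <profile> --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2) and differ only in the network selection flag, as the Run lines below show:

	auto:               (no CNI flag; minikube picks a default)
	kindnet:            --cni=kindnet
	calico:             --cni=calico
	custom-flannel:     --cni=testdata/kube-flannel.yaml   # CNI manifest supplied from a file
	false:              --cni=false
	enable-default-cni: --enable-default-cni=true
	flannel:            --cni=flannel
	bridge:             --cni=bridge
	kubenet:            --network-plugin=kubenet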

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-622555 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/newest-cni/serial/Pause (2.57s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-622555 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-622555 -n newest-cni-622555
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-622555 -n newest-cni-622555: exit status 2 (263.520878ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-622555 -n newest-cni-622555
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-622555 -n newest-cni-622555: exit status 2 (277.989193ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-622555 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-622555 -n newest-cni-622555
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-622555 -n newest-cni-622555
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.57s)

TestNetworkPlugins/group/kindnet/Start (105.3s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-637626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E0812 11:36:39.642722   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
E0812 11:36:43.040995   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-637626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m45.303312567s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (105.30s)

TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-637626 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)
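
Note: the KubeletFlags subtests only fetch the kubelet command line over SSH so the harness can assert on its flags. The same output can be inspected by hand (pgrep -a prints the PID plus the full argument list):

	out/minikube-linux-amd64 ssh -p auto-637626 "pgrep -a kubelet"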

TestNetworkPlugins/group/auto/NetCatPod (12.25s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-637626 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-t899k" [fdcf9f93-dd9f-4e81-bc8c-bd687b7b582d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-t899k" [fdcf9f93-dd9f-4e81-bc8c-bd687b7b582d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004723449s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.25s)
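
Note: NetCatPod deploys the netcat test workload and waits for it to go Ready (the Pending -> Running transition above). The Go helper polls pods matching app=netcat; an equivalent manual check, assuming a recent kubectl with the wait subcommand, would be:

	kubectl --context auto-637626 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-637626 wait --for=condition=Ready pod -l app=netcat --timeout=15m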

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-txkld" [fd037edc-5f59-41ee-b2b5-f9bd521d0d3f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.010049832s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-j6rs9" [61ae39d1-ec77-406c-a83e-a971489b308e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005945463s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
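
Note: ControllerPod confirms the CNI's own controller pod is healthy before any traffic tests run; each plugin uses its own label and namespace (app=kindnet in kube-system here; k8s-app=calico-node and app=flannel in kube-flannel appear later in this run). A hand-rolled equivalent of the wait, assuming kubectl wait is available:

	kubectl --context kindnet-637626 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m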

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-637626 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-637626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-637626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
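
Note: the DNS/Localhost/HairPin trio above is the standard connectivity probe set, all executed from inside the netcat deployment: cluster-service DNS resolution, a loopback dial, and a hairpin dial back to the pod through its own Service name:

	kubectl --context auto-637626 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-637626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-637626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"   # hairpin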

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-txkld" [fd037edc-5f59-41ee-b2b5-f9bd521d0d3f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005381949s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-967148 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)
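
Note: UserAppExistsAfterStop and AddonExistsAfterStop both verify that the dashboard survived the stop/start cycle by waiting on the k8s-app=kubernetes-dashboard label; the describe call at the end captures scraper state for the logs. A manual equivalent, assuming the same context:

	kubectl --context default-k8s-diff-port-967148 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-967148 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper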

TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-637626 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-mjz84" [f901cab0-8f5b-4025-a3a6-80fb02e6d9d1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00599164s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.28s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-637626 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gvrbx" [70050800-e28e-4b3c-adc5-f65f2d2705a3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-gvrbx" [70050800-e28e-4b3c-adc5-f65f2d2705a3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.00741551s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.28s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-967148 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)
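
Note: VerifyKubernetesImages lists the images cached in the profile and checks them against the expected set for the Kubernetes version; images such as gvisor-addon and busybox above are leftovers from earlier subtests and were reported but did not fail the test in this run. To inspect by hand:

	out/minikube-linux-amd64 -p default-k8s-diff-port-967148 image list --format=json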

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-967148 --alsologtostderr -v=1
E0812 11:37:19.242203   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/no-preload-208795/client.crt: no such file or directory
E0812 11:37:19.247543   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/no-preload-208795/client.crt: no such file or directory
E0812 11:37:19.257850   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/no-preload-208795/client.crt: no such file or directory
E0812 11:37:19.278966   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/no-preload-208795/client.crt: no such file or directory
E0812 11:37:19.319135   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/no-preload-208795/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-967148 -n default-k8s-diff-port-967148
E0812 11:37:19.399563   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/no-preload-208795/client.crt: no such file or directory
E0812 11:37:19.560698   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/no-preload-208795/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-967148 -n default-k8s-diff-port-967148: exit status 2 (302.102003ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-967148 -n default-k8s-diff-port-967148
E0812 11:37:19.881632   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/no-preload-208795/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-967148 -n default-k8s-diff-port-967148: exit status 2 (288.492319ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-967148 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-967148 -n default-k8s-diff-port-967148
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-967148 -n default-k8s-diff-port-967148
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.17s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-mjz84" [f901cab0-8f5b-4025-a3a6-80fb02e6d9d1] Running
E0812 11:37:20.522201   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/no-preload-208795/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011154837s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-679119 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestNetworkPlugins/group/calico/Start (106.44s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-637626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E0812 11:37:24.363696   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/no-preload-208795/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-637626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m46.441480566s)
--- PASS: TestNetworkPlugins/group/calico/Start (106.44s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-679119 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (3.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-679119 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-679119 -n old-k8s-version-679119
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-679119 -n old-k8s-version-679119: exit status 2 (294.996577ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-679119 -n old-k8s-version-679119
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-679119 -n old-k8s-version-679119: exit status 2 (283.763643ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-679119 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-679119 -n old-k8s-version-679119
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-679119 -n old-k8s-version-679119
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.07s)
E0812 11:40:49.393156   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/old-k8s-version-679119/client.crt: no such file or directory

TestNetworkPlugins/group/custom-flannel/Start (114.03s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-637626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-637626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m54.025855361s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (114.03s)

TestNetworkPlugins/group/kindnet/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-637626 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-637626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-637626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/false/Start (134.39s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-637626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E0812 11:37:39.724802   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/no-preload-208795/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-637626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (2m14.388657571s)
--- PASS: TestNetworkPlugins/group/false/Start (134.39s)

TestNetworkPlugins/group/enable-default-cni/Start (149.85s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-637626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0812 11:38:00.206016   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/no-preload-208795/client.crt: no such file or directory
E0812 11:38:05.548225   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/old-k8s-version-679119/client.crt: no such file or directory
E0812 11:38:05.553687   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/old-k8s-version-679119/client.crt: no such file or directory
E0812 11:38:05.564070   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/old-k8s-version-679119/client.crt: no such file or directory
E0812 11:38:05.584858   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/old-k8s-version-679119/client.crt: no such file or directory
E0812 11:38:05.625665   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/old-k8s-version-679119/client.crt: no such file or directory
E0812 11:38:05.706123   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/old-k8s-version-679119/client.crt: no such file or directory
E0812 11:38:05.866669   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/old-k8s-version-679119/client.crt: no such file or directory
E0812 11:38:06.086671   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/skaffold-628488/client.crt: no such file or directory
E0812 11:38:06.187589   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/old-k8s-version-679119/client.crt: no such file or directory
E0812 11:38:06.828117   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/old-k8s-version-679119/client.crt: no such file or directory
E0812 11:38:08.109091   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/old-k8s-version-679119/client.crt: no such file or directory
E0812 11:38:10.670065   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/old-k8s-version-679119/client.crt: no such file or directory
E0812 11:38:15.790778   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/old-k8s-version-679119/client.crt: no such file or directory
E0812 11:38:26.031069   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/old-k8s-version-679119/client.crt: no such file or directory
E0812 11:38:41.167137   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/no-preload-208795/client.crt: no such file or directory
E0812 11:38:43.737991   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.crt: no such file or directory
E0812 11:38:46.511876   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/old-k8s-version-679119/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-637626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (2m29.84556284s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (149.85s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-fxbds" [8f51503a-b660-4ebd-8509-7ac74b53ef10] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007168711s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-637626 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (13.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-637626 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-l25w9" [0f11d146-0433-4b21-ad58-9ae457fd45d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-l25w9" [0f11d146-0433-4b21-ad58-9ae457fd45d6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.006219871s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.26s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-637626 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-637626 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context custom-flannel-637626 replace --force -f testdata/netcat-deployment.yaml: (1.135502402s)
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-552m8" [50819dfc-ee61-41c5-a828-def8d5fe90d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-552m8" [50819dfc-ee61-41c5-a828-def8d5fe90d9] Running
E0812 11:39:27.472639   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/old-k8s-version-679119/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.122309936s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)

TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-637626 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-637626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-637626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-637626 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.54s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-637626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.54s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-637626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

TestNetworkPlugins/group/false/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-637626 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.29s)

TestNetworkPlugins/group/false/NetCatPod (13.45s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-637626 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7hmpv" [bbc5333a-86e8-4954-94ad-25a178833bc6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7hmpv" [bbc5333a-86e8-4954-94ad-25a178833bc6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.005632449s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.45s)

TestNetworkPlugins/group/flannel/Start (87.87s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-637626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-637626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m27.869422201s)
--- PASS: TestNetworkPlugins/group/flannel/Start (87.87s)

TestNetworkPlugins/group/bridge/Start (135.41s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-637626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-637626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (2m15.412962091s)
--- PASS: TestNetworkPlugins/group/bridge/Start (135.41s)

TestNetworkPlugins/group/false/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-637626 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.20s)

TestNetworkPlugins/group/false/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-637626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.17s)

TestNetworkPlugins/group/false/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-637626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-637626 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-637626 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-ksvvk" [a6932222-f9df-45c8-8b5b-5a81a580e0c4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-ksvvk" [a6932222-f9df-45c8-8b5b-5a81a580e0c4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.006097498s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

TestNetworkPlugins/group/kubenet/Start (140.38s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-637626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
E0812 11:40:16.594659   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/addons-705597/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-637626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (2m20.379038146s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (140.38s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-637626 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-637626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-637626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-ctxwm" [87669b31-9659-47d3-b6bf-763324e5148b] Running
E0812 11:41:21.890309   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/default-k8s-diff-port-967148/client.crt: no such file or directory
E0812 11:41:21.895903   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/default-k8s-diff-port-967148/client.crt: no such file or directory
E0812 11:41:21.906343   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/default-k8s-diff-port-967148/client.crt: no such file or directory
E0812 11:41:21.926740   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/default-k8s-diff-port-967148/client.crt: no such file or directory
E0812 11:41:21.967296   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/default-k8s-diff-port-967148/client.crt: no such file or directory
E0812 11:41:22.047748   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/default-k8s-diff-port-967148/client.crt: no such file or directory
E0812 11:41:22.208101   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/default-k8s-diff-port-967148/client.crt: no such file or directory
E0812 11:41:22.528346   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/default-k8s-diff-port-967148/client.crt: no such file or directory
E0812 11:41:23.169421   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/default-k8s-diff-port-967148/client.crt: no such file or directory
E0812 11:41:24.450200   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/default-k8s-diff-port-967148/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.01575102s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-637626 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.47s)

TestNetworkPlugins/group/flannel/NetCatPod (11.53s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-637626 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-qnx7q" [862cf18b-6256-4c0d-bec8-5426e58ff9cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0812 11:41:27.010853   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/default-k8s-diff-port-967148/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-qnx7q" [862cf18b-6256-4c0d-bec8-5426e58ff9cb] Running
E0812 11:41:32.131652   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/default-k8s-diff-port-967148/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.007110879s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.53s)

TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-637626 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-637626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-637626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-637626 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

TestNetworkPlugins/group/bridge/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-637626 replace --force -f testdata/netcat-deployment.yaml
E0812 11:42:10.349052   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/kindnet-637626/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-wfjhl" [e8205804-4437-4984-9578-dd0c39b8045a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0812 11:42:12.909334   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/kindnet-637626/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-wfjhl" [e8205804-4437-4984-9578-dd0c39b8045a] Running
E0812 11:42:16.722466   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/auto-637626/client.crt: no such file or directory
E0812 11:42:18.029881   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/kindnet-637626/client.crt: no such file or directory
E0812 11:42:19.241370   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/no-preload-208795/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004931197s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.31s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-637626 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-637626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-637626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-637626 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-637626 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-w8sg5" [3ee63174-0283-490d-b8a7-5e7592505516] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0812 11:42:37.202862   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/auto-637626/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-w8sg5" [3ee63174-0283-490d-b8a7-5e7592505516] Running
E0812 11:42:43.814386   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/default-k8s-diff-port-967148/client.crt: no such file or directory
E0812 11:42:46.928283   10968 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/no-preload-208795/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004162062s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.34s)
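NetCatPod force-replaces the deployment and then polls for up to 15m0s for a Ready pod labelled app=netcat; here it went healthy in about ten seconds. The same wait can be expressed directly with kubectl (equivalent commands for manual use, not the harness's own code):

  kubectl --context kubenet-637626 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context kubenet-637626 wait --for=condition=Ready \
    pod -l app=netcat --timeout=15m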

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-637626 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-637626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-637626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.13s)

                                                
                                    

Test skip (34/349)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-rc.0/cached-images 0
24 TestDownloadOnly/v1.31.0-rc.0/binaries 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
29 TestDownloadOnlyKic 0
47 TestAddons/parallel/Olm 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
115 TestFunctional/parallel/PodmanEnv 0
145 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
146 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
147 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
150 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
151 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
193 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
220 TestKicCustomNetwork 0
221 TestKicExistingNetwork 0
222 TestKicCustomSubnet 0
223 TestKicStaticIP 0
255 TestChangeNoneUser 0
258 TestScheduledStopWindows 0
262 TestInsufficientStorage 0
266 TestMissingContainerUpgrade 0
274 TestStartStop/group/disable-driver-mounts 0.16
292 TestNetworkPlugins/group/cilium 4.19
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
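PodmanEnv does not apply to this job's driver/runtime combination, so it skips. The docker-side analogue these jobs do exercise is minikube docker-env, which points a local docker client at the daemon inside the node (illustrative usage; substitute a real profile name for <profile>):

  eval "$(out/minikube-linux-amd64 docker-env -p <profile>)"
  docker ps    # now lists containers running inside the minikube node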

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
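All eight TunnelCmd subtests skip for the same reason: minikube tunnel must add host routes via route, and the CI user cannot escalate without a password. Outside CI the usual pattern is to cache sudo credentials before starting the tunnel (a sketch; <profile> is a placeholder):

  sudo -v    # pre-authenticate so the route changes can run unattended
  out/minikube-linux-amd64 tunnel -p <profile>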

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-274388" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-274388
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-637626 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-637626

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-637626

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-637626

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-637626

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-637626

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-637626

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-637626

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-637626

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-637626

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-637626

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-637626

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-637626" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-637626" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-637626" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-637626" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-637626" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-637626" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-637626" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-637626" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-637626

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-637626

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-637626" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-637626" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-637626

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-637626

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-637626" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-637626" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-637626" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-637626" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-637626" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19409-3796/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 12 Aug 2024 11:20:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.240:8443
  name: pause-342687
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19409-3796/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 12 Aug 2024 11:20:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.72.144:8443
  name: running-upgrade-453798
contexts:
- context:
    cluster: pause-342687
    extensions:
    - extension:
        last-update: Mon, 12 Aug 2024 11:20:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-342687
  name: pause-342687
- context:
    cluster: running-upgrade-453798
    extensions:
    - extension:
        last-update: Mon, 12 Aug 2024 11:20:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: running-upgrade-453798
  name: running-upgrade-453798
current-context: running-upgrade-453798
kind: Config
preferences: {}
users:
- name: pause-342687
  user:
    client-certificate: /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/pause-342687/client.crt
    client-key: /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/pause-342687/client.key
- name: running-upgrade-453798
  user:
    client-certificate: /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/running-upgrade-453798/client.crt
    client-key: /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/running-upgrade-453798/client.key
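This kubeconfig is the root cause of the failures throughout the debugLogs dump: only the pause-342687 and running-upgrade-453798 entries remain, so every --context cilium-637626 invocation fails before reaching a cluster. Which contexts kubectl can actually use is easy to confirm (ordinary kubectl commands, shown for reference):

  kubectl config get-contexts -o name
  kubectl config current-context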

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-637626

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-637626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637626"

                                                
                                                
----------------------- debugLogs end: cilium-637626 [took: 4.003716085s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-637626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-637626
--- SKIP: TestNetworkPlugins/group/cilium (4.19s)

                                                
                                    