Test Report: KVM_Linux 19199

50cd99089b98d3ac0f2f64a84f76c9502bf70799:2024-07-09:35253

Failed tests (1/341)

| Order | Failed test                 | Duration |
|-------|-----------------------------|----------|
| 30    | TestAddons/parallel/Ingress | 483.07s  |
TestAddons/parallel/Ingress (483.07s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-470383 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-470383 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-470383 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:247: (dbg) Non-zero exit: kubectl --context addons-470383 replace --force -f testdata/nginx-pod-svc.yaml: exit status 1 (555.279393ms)

-- stdout --
	service/nginx replaced

-- /stdout --
** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": service "volcano-admission-service" not found

** /stderr **
addons_test.go:249: failed to kubectl replace nginx-pod-svc. args "kubectl --context addons-470383 replace --force -f testdata/nginx-pod-svc.yaml". exit status 1
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:329: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-470383 -n addons-470383
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2024-07-09 16:49:13.861421786 +0000 UTC m=+768.661112983
addons_test.go:253: failed waiting for ngnix pod: run=nginx within 8m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-470383 -n addons-470383
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-470383 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-470383 logs -n 25: (1.016136184s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 09 Jul 24 16:36 UTC | 09 Jul 24 16:36 UTC |
	| delete  | -p download-only-894250                                                                     | download-only-894250 | jenkins | v1.33.1 | 09 Jul 24 16:36 UTC | 09 Jul 24 16:36 UTC |
	| start   | -o=json --download-only                                                                     | download-only-574622 | jenkins | v1.33.1 | 09 Jul 24 16:36 UTC |                     |
	|         | -p download-only-574622                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 09 Jul 24 16:36 UTC | 09 Jul 24 16:36 UTC |
	| delete  | -p download-only-574622                                                                     | download-only-574622 | jenkins | v1.33.1 | 09 Jul 24 16:36 UTC | 09 Jul 24 16:36 UTC |
	| delete  | -p download-only-894250                                                                     | download-only-894250 | jenkins | v1.33.1 | 09 Jul 24 16:36 UTC | 09 Jul 24 16:36 UTC |
	| delete  | -p download-only-574622                                                                     | download-only-574622 | jenkins | v1.33.1 | 09 Jul 24 16:36 UTC | 09 Jul 24 16:36 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-675753 | jenkins | v1.33.1 | 09 Jul 24 16:36 UTC |                     |
	|         | binary-mirror-675753                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44415                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-675753                                                                     | binary-mirror-675753 | jenkins | v1.33.1 | 09 Jul 24 16:36 UTC | 09 Jul 24 16:36 UTC |
	| addons  | disable dashboard -p                                                                        | addons-470383        | jenkins | v1.33.1 | 09 Jul 24 16:36 UTC |                     |
	|         | addons-470383                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-470383        | jenkins | v1.33.1 | 09 Jul 24 16:36 UTC |                     |
	|         | addons-470383                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-470383 --wait=true                                                                | addons-470383        | jenkins | v1.33.1 | 09 Jul 24 16:36 UTC | 09 Jul 24 16:40 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2  --addons=ingress                                                             |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-470383        | jenkins | v1.33.1 | 09 Jul 24 16:40 UTC | 09 Jul 24 16:40 UTC |
	|         | -p addons-470383                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-470383        | jenkins | v1.33.1 | 09 Jul 24 16:40 UTC | 09 Jul 24 16:40 UTC |
	|         | -p addons-470383                                                                            |                      |         |         |                     |                     |
	| addons  | addons-470383 addons disable                                                                | addons-470383        | jenkins | v1.33.1 | 09 Jul 24 16:40 UTC | 09 Jul 24 16:40 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-470383 ip                                                                            | addons-470383        | jenkins | v1.33.1 | 09 Jul 24 16:40 UTC | 09 Jul 24 16:40 UTC |
	| addons  | addons-470383 addons disable                                                                | addons-470383        | jenkins | v1.33.1 | 09 Jul 24 16:40 UTC | 09 Jul 24 16:40 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-470383        | jenkins | v1.33.1 | 09 Jul 24 16:40 UTC | 09 Jul 24 16:40 UTC |
	|         | addons-470383                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-470383 ssh cat                                                                       | addons-470383        | jenkins | v1.33.1 | 09 Jul 24 16:40 UTC | 09 Jul 24 16:40 UTC |
	|         | /opt/local-path-provisioner/pvc-35f374b4-91fa-495b-8ec2-70cb471acc67_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-470383 addons disable                                                                | addons-470383        | jenkins | v1.33.1 | 09 Jul 24 16:40 UTC | 09 Jul 24 16:41 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-470383        | jenkins | v1.33.1 | 09 Jul 24 16:40 UTC | 09 Jul 24 16:41 UTC |
	|         | addons-470383                                                                               |                      |         |         |                     |                     |
	| addons  | addons-470383 addons                                                                        | addons-470383        | jenkins | v1.33.1 | 09 Jul 24 16:41 UTC | 09 Jul 24 16:41 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-470383 addons disable                                                                | addons-470383        | jenkins | v1.33.1 | 09 Jul 24 16:41 UTC | 09 Jul 24 16:41 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-470383 addons                                                                        | addons-470383        | jenkins | v1.33.1 | 09 Jul 24 16:41 UTC | 09 Jul 24 16:41 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-470383 addons                                                                        | addons-470383        | jenkins | v1.33.1 | 09 Jul 24 16:41 UTC | 09 Jul 24 16:41 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/09 16:36:42
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0709 16:36:42.033062   15338 out.go:291] Setting OutFile to fd 1 ...
	I0709 16:36:42.033155   15338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 16:36:42.033162   15338 out.go:304] Setting ErrFile to fd 2...
	I0709 16:36:42.033166   15338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 16:36:42.033326   15338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19199-7540/.minikube/bin
	I0709 16:36:42.033923   15338 out.go:298] Setting JSON to false
	I0709 16:36:42.034685   15338 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1143,"bootTime":1720541859,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0709 16:36:42.034741   15338 start.go:139] virtualization: kvm guest
	I0709 16:36:42.036785   15338 out.go:177] * [addons-470383] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0709 16:36:42.038091   15338 out.go:177]   - MINIKUBE_LOCATION=19199
	I0709 16:36:42.038088   15338 notify.go:220] Checking for updates...
	I0709 16:36:42.039375   15338 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 16:36:42.040531   15338 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19199-7540/kubeconfig
	I0709 16:36:42.041737   15338 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19199-7540/.minikube
	I0709 16:36:42.042973   15338 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0709 16:36:42.044122   15338 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0709 16:36:42.045409   15338 driver.go:392] Setting default libvirt URI to qemu:///system
	I0709 16:36:42.078374   15338 out.go:177] * Using the kvm2 driver based on user configuration
	I0709 16:36:42.079600   15338 start.go:297] selected driver: kvm2
	I0709 16:36:42.079610   15338 start.go:901] validating driver "kvm2" against <nil>
	I0709 16:36:42.079622   15338 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0709 16:36:42.080318   15338 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 16:36:42.080408   15338 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19199-7540/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0709 16:36:42.094547   15338 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0709 16:36:42.094590   15338 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0709 16:36:42.094774   15338 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 16:36:42.094833   15338 cni.go:84] Creating CNI manager for ""
	I0709 16:36:42.094848   15338 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0709 16:36:42.094858   15338 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0709 16:36:42.094911   15338 start.go:340] cluster config:
	{Name:addons-470383 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-470383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 16:36:42.095000   15338 iso.go:125] acquiring lock: {Name:mk93d6a6f33561e26ce93d6660cdedc1d654228a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 16:36:42.096569   15338 out.go:177] * Starting "addons-470383" primary control-plane node in "addons-470383" cluster
	I0709 16:36:42.097723   15338 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 16:36:42.097757   15338 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19199-7540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0709 16:36:42.097764   15338 cache.go:56] Caching tarball of preloaded images
	I0709 16:36:42.097820   15338 preload.go:173] Found /home/jenkins/minikube-integration/19199-7540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 16:36:42.097830   15338 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 16:36:42.098096   15338 profile.go:143] Saving config to /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/config.json ...
	I0709 16:36:42.098113   15338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/config.json: {Name:mk9303ad228e575c453d2e7133257a0274cfee22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 16:36:42.098231   15338 start.go:360] acquireMachinesLock for addons-470383: {Name:mk7ada9c404a215658f7a860c4bdb410f118a14e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 16:36:42.098275   15338 start.go:364] duration metric: took 31.465µs to acquireMachinesLock for "addons-470383"
	I0709 16:36:42.098297   15338 start.go:93] Provisioning new machine with config: &{Name:addons-470383 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-470383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 16:36:42.098348   15338 start.go:125] createHost starting for "" (driver="kvm2")
	I0709 16:36:42.099971   15338 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0709 16:36:42.100076   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:36:42.100112   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:36:42.113601   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45097
	I0709 16:36:42.113998   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:36:42.114469   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:36:42.114491   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:36:42.114855   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:36:42.115040   15338 main.go:141] libmachine: (addons-470383) Calling .GetMachineName
	I0709 16:36:42.115163   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:36:42.115306   15338 start.go:159] libmachine.API.Create for "addons-470383" (driver="kvm2")
	I0709 16:36:42.115332   15338 client.go:168] LocalClient.Create starting
	I0709 16:36:42.115372   15338 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19199-7540/.minikube/certs/ca.pem
	I0709 16:36:42.232953   15338 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19199-7540/.minikube/certs/cert.pem
	I0709 16:36:42.311659   15338 main.go:141] libmachine: Running pre-create checks...
	I0709 16:36:42.311682   15338 main.go:141] libmachine: (addons-470383) Calling .PreCreateCheck
	I0709 16:36:42.312165   15338 main.go:141] libmachine: (addons-470383) Calling .GetConfigRaw
	I0709 16:36:42.312648   15338 main.go:141] libmachine: Creating machine...
	I0709 16:36:42.312663   15338 main.go:141] libmachine: (addons-470383) Calling .Create
	I0709 16:36:42.312821   15338 main.go:141] libmachine: (addons-470383) Creating KVM machine...
	I0709 16:36:42.314070   15338 main.go:141] libmachine: (addons-470383) DBG | found existing default KVM network
	I0709 16:36:42.314785   15338 main.go:141] libmachine: (addons-470383) DBG | I0709 16:36:42.314635   15360 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I0709 16:36:42.314823   15338 main.go:141] libmachine: (addons-470383) DBG | created network xml: 
	I0709 16:36:42.314848   15338 main.go:141] libmachine: (addons-470383) DBG | <network>
	I0709 16:36:42.314861   15338 main.go:141] libmachine: (addons-470383) DBG |   <name>mk-addons-470383</name>
	I0709 16:36:42.314878   15338 main.go:141] libmachine: (addons-470383) DBG |   <dns enable='no'/>
	I0709 16:36:42.314891   15338 main.go:141] libmachine: (addons-470383) DBG |   
	I0709 16:36:42.314903   15338 main.go:141] libmachine: (addons-470383) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0709 16:36:42.314909   15338 main.go:141] libmachine: (addons-470383) DBG |     <dhcp>
	I0709 16:36:42.314918   15338 main.go:141] libmachine: (addons-470383) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0709 16:36:42.314926   15338 main.go:141] libmachine: (addons-470383) DBG |     </dhcp>
	I0709 16:36:42.314934   15338 main.go:141] libmachine: (addons-470383) DBG |   </ip>
	I0709 16:36:42.314941   15338 main.go:141] libmachine: (addons-470383) DBG |   
	I0709 16:36:42.314949   15338 main.go:141] libmachine: (addons-470383) DBG | </network>
	I0709 16:36:42.314978   15338 main.go:141] libmachine: (addons-470383) DBG | 
	I0709 16:36:42.320170   15338 main.go:141] libmachine: (addons-470383) DBG | trying to create private KVM network mk-addons-470383 192.168.39.0/24...
	I0709 16:36:42.381053   15338 main.go:141] libmachine: (addons-470383) DBG | private KVM network mk-addons-470383 192.168.39.0/24 created
	I0709 16:36:42.381095   15338 main.go:141] libmachine: (addons-470383) Setting up store path in /home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383 ...
	I0709 16:36:42.381115   15338 main.go:141] libmachine: (addons-470383) DBG | I0709 16:36:42.381041   15360 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19199-7540/.minikube
	I0709 16:36:42.381122   15338 main.go:141] libmachine: (addons-470383) Building disk image from file:///home/jenkins/minikube-integration/19199-7540/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso
	I0709 16:36:42.381215   15338 main.go:141] libmachine: (addons-470383) Downloading /home/jenkins/minikube-integration/19199-7540/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19199-7540/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 16:36:42.631435   15338 main.go:141] libmachine: (addons-470383) DBG | I0709 16:36:42.631311   15360 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa...
	I0709 16:36:42.886137   15338 main.go:141] libmachine: (addons-470383) DBG | I0709 16:36:42.885979   15360 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/addons-470383.rawdisk...
	I0709 16:36:42.886182   15338 main.go:141] libmachine: (addons-470383) DBG | Writing magic tar header
	I0709 16:36:42.886254   15338 main.go:141] libmachine: (addons-470383) DBG | Writing SSH key tar header
	I0709 16:36:42.886318   15338 main.go:141] libmachine: (addons-470383) DBG | I0709 16:36:42.886094   15360 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383 ...
	I0709 16:36:42.886340   15338 main.go:141] libmachine: (addons-470383) Setting executable bit set on /home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383 (perms=drwx------)
	I0709 16:36:42.886350   15338 main.go:141] libmachine: (addons-470383) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383
	I0709 16:36:42.886364   15338 main.go:141] libmachine: (addons-470383) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19199-7540/.minikube/machines
	I0709 16:36:42.886372   15338 main.go:141] libmachine: (addons-470383) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19199-7540/.minikube
	I0709 16:36:42.886381   15338 main.go:141] libmachine: (addons-470383) Setting executable bit set on /home/jenkins/minikube-integration/19199-7540/.minikube/machines (perms=drwxr-xr-x)
	I0709 16:36:42.886389   15338 main.go:141] libmachine: (addons-470383) Setting executable bit set on /home/jenkins/minikube-integration/19199-7540/.minikube (perms=drwxr-xr-x)
	I0709 16:36:42.886397   15338 main.go:141] libmachine: (addons-470383) Setting executable bit set on /home/jenkins/minikube-integration/19199-7540 (perms=drwxrwxr-x)
	I0709 16:36:42.886405   15338 main.go:141] libmachine: (addons-470383) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0709 16:36:42.886415   15338 main.go:141] libmachine: (addons-470383) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0709 16:36:42.886422   15338 main.go:141] libmachine: (addons-470383) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19199-7540
	I0709 16:36:42.886427   15338 main.go:141] libmachine: (addons-470383) Creating domain...
	I0709 16:36:42.886438   15338 main.go:141] libmachine: (addons-470383) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0709 16:36:42.886449   15338 main.go:141] libmachine: (addons-470383) DBG | Checking permissions on dir: /home/jenkins
	I0709 16:36:42.886466   15338 main.go:141] libmachine: (addons-470383) DBG | Checking permissions on dir: /home
	I0709 16:36:42.886478   15338 main.go:141] libmachine: (addons-470383) DBG | Skipping /home - not owner
	I0709 16:36:42.887355   15338 main.go:141] libmachine: (addons-470383) define libvirt domain using xml: 
	I0709 16:36:42.887376   15338 main.go:141] libmachine: (addons-470383) <domain type='kvm'>
	I0709 16:36:42.887382   15338 main.go:141] libmachine: (addons-470383)   <name>addons-470383</name>
	I0709 16:36:42.887388   15338 main.go:141] libmachine: (addons-470383)   <memory unit='MiB'>4000</memory>
	I0709 16:36:42.887393   15338 main.go:141] libmachine: (addons-470383)   <vcpu>2</vcpu>
	I0709 16:36:42.887397   15338 main.go:141] libmachine: (addons-470383)   <features>
	I0709 16:36:42.887401   15338 main.go:141] libmachine: (addons-470383)     <acpi/>
	I0709 16:36:42.887412   15338 main.go:141] libmachine: (addons-470383)     <apic/>
	I0709 16:36:42.887417   15338 main.go:141] libmachine: (addons-470383)     <pae/>
	I0709 16:36:42.887422   15338 main.go:141] libmachine: (addons-470383)     
	I0709 16:36:42.887427   15338 main.go:141] libmachine: (addons-470383)   </features>
	I0709 16:36:42.887431   15338 main.go:141] libmachine: (addons-470383)   <cpu mode='host-passthrough'>
	I0709 16:36:42.887436   15338 main.go:141] libmachine: (addons-470383)   
	I0709 16:36:42.887444   15338 main.go:141] libmachine: (addons-470383)   </cpu>
	I0709 16:36:42.887449   15338 main.go:141] libmachine: (addons-470383)   <os>
	I0709 16:36:42.887457   15338 main.go:141] libmachine: (addons-470383)     <type>hvm</type>
	I0709 16:36:42.887462   15338 main.go:141] libmachine: (addons-470383)     <boot dev='cdrom'/>
	I0709 16:36:42.887467   15338 main.go:141] libmachine: (addons-470383)     <boot dev='hd'/>
	I0709 16:36:42.887472   15338 main.go:141] libmachine: (addons-470383)     <bootmenu enable='no'/>
	I0709 16:36:42.887489   15338 main.go:141] libmachine: (addons-470383)   </os>
	I0709 16:36:42.887494   15338 main.go:141] libmachine: (addons-470383)   <devices>
	I0709 16:36:42.887504   15338 main.go:141] libmachine: (addons-470383)     <disk type='file' device='cdrom'>
	I0709 16:36:42.887513   15338 main.go:141] libmachine: (addons-470383)       <source file='/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/boot2docker.iso'/>
	I0709 16:36:42.887520   15338 main.go:141] libmachine: (addons-470383)       <target dev='hdc' bus='scsi'/>
	I0709 16:36:42.887525   15338 main.go:141] libmachine: (addons-470383)       <readonly/>
	I0709 16:36:42.887529   15338 main.go:141] libmachine: (addons-470383)     </disk>
	I0709 16:36:42.887535   15338 main.go:141] libmachine: (addons-470383)     <disk type='file' device='disk'>
	I0709 16:36:42.887542   15338 main.go:141] libmachine: (addons-470383)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0709 16:36:42.887550   15338 main.go:141] libmachine: (addons-470383)       <source file='/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/addons-470383.rawdisk'/>
	I0709 16:36:42.887555   15338 main.go:141] libmachine: (addons-470383)       <target dev='hda' bus='virtio'/>
	I0709 16:36:42.887561   15338 main.go:141] libmachine: (addons-470383)     </disk>
	I0709 16:36:42.887568   15338 main.go:141] libmachine: (addons-470383)     <interface type='network'>
	I0709 16:36:42.887574   15338 main.go:141] libmachine: (addons-470383)       <source network='mk-addons-470383'/>
	I0709 16:36:42.887580   15338 main.go:141] libmachine: (addons-470383)       <model type='virtio'/>
	I0709 16:36:42.887586   15338 main.go:141] libmachine: (addons-470383)     </interface>
	I0709 16:36:42.887596   15338 main.go:141] libmachine: (addons-470383)     <interface type='network'>
	I0709 16:36:42.887602   15338 main.go:141] libmachine: (addons-470383)       <source network='default'/>
	I0709 16:36:42.887607   15338 main.go:141] libmachine: (addons-470383)       <model type='virtio'/>
	I0709 16:36:42.887612   15338 main.go:141] libmachine: (addons-470383)     </interface>
	I0709 16:36:42.887618   15338 main.go:141] libmachine: (addons-470383)     <serial type='pty'>
	I0709 16:36:42.887623   15338 main.go:141] libmachine: (addons-470383)       <target port='0'/>
	I0709 16:36:42.887629   15338 main.go:141] libmachine: (addons-470383)     </serial>
	I0709 16:36:42.887634   15338 main.go:141] libmachine: (addons-470383)     <console type='pty'>
	I0709 16:36:42.887646   15338 main.go:141] libmachine: (addons-470383)       <target type='serial' port='0'/>
	I0709 16:36:42.887653   15338 main.go:141] libmachine: (addons-470383)     </console>
	I0709 16:36:42.887658   15338 main.go:141] libmachine: (addons-470383)     <rng model='virtio'>
	I0709 16:36:42.887666   15338 main.go:141] libmachine: (addons-470383)       <backend model='random'>/dev/random</backend>
	I0709 16:36:42.887670   15338 main.go:141] libmachine: (addons-470383)     </rng>
	I0709 16:36:42.887700   15338 main.go:141] libmachine: (addons-470383)     
	I0709 16:36:42.887720   15338 main.go:141] libmachine: (addons-470383)     
	I0709 16:36:42.887727   15338 main.go:141] libmachine: (addons-470383)   </devices>
	I0709 16:36:42.887732   15338 main.go:141] libmachine: (addons-470383) </domain>
	I0709 16:36:42.887743   15338 main.go:141] libmachine: (addons-470383) 
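The domain XML above encodes the key facts of the VM: boot from the boot2docker ISO first, then the raw disk, with one NIC on the private `mk-addons-470383` network and one on libvirt's `default` network. A trimmed copy can be inspected the same way; again a commentary sketch, not driver code:

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the <domain> definition logged above (paths elided).
DOMAIN_XML = """<domain type='kvm'>
  <name>addons-470383</name>
  <memory unit='MiB'>4000</memory>
  <vcpu>2</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <target dev='hdc' bus='scsi'/>
    </disk>
    <disk type='file' device='disk'>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='mk-addons-470383'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>"""

def boot_order(xml_text):
    """List boot devices in firmware order."""
    root = ET.fromstring(xml_text)
    return [b.get("dev") for b in root.findall("./os/boot")]

def networks(xml_text):
    """List the libvirt networks each interface attaches to."""
    root = ET.fromstring(xml_text)
    return [i.find("source").get("network")
            for i in root.findall("./devices/interface")]
```

The two-NIC layout matters for the log lines that follow: the MAC `52:54:00:be:f6:f7` belongs to the `default` network and `52:54:00:21:ff:9e` to `mk-addons-470383`, which is the one whose DHCP lease the driver polls for.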
	I0709 16:36:42.893711   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:be:f6:f7 in network default
	I0709 16:36:42.894208   15338 main.go:141] libmachine: (addons-470383) Ensuring networks are active...
	I0709 16:36:42.894227   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:36:42.894889   15338 main.go:141] libmachine: (addons-470383) Ensuring network default is active
	I0709 16:36:42.895165   15338 main.go:141] libmachine: (addons-470383) Ensuring network mk-addons-470383 is active
	I0709 16:36:42.895724   15338 main.go:141] libmachine: (addons-470383) Getting domain xml...
	I0709 16:36:42.896387   15338 main.go:141] libmachine: (addons-470383) Creating domain...
	I0709 16:36:44.257604   15338 main.go:141] libmachine: (addons-470383) Waiting to get IP...
	I0709 16:36:44.258382   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:36:44.258788   15338 main.go:141] libmachine: (addons-470383) DBG | unable to find current IP address of domain addons-470383 in network mk-addons-470383
	I0709 16:36:44.258830   15338 main.go:141] libmachine: (addons-470383) DBG | I0709 16:36:44.258784   15360 retry.go:31] will retry after 276.131514ms: waiting for machine to come up
	I0709 16:36:44.536304   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:36:44.536750   15338 main.go:141] libmachine: (addons-470383) DBG | unable to find current IP address of domain addons-470383 in network mk-addons-470383
	I0709 16:36:44.536774   15338 main.go:141] libmachine: (addons-470383) DBG | I0709 16:36:44.536718   15360 retry.go:31] will retry after 251.498314ms: waiting for machine to come up
	I0709 16:36:44.790207   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:36:44.790605   15338 main.go:141] libmachine: (addons-470383) DBG | unable to find current IP address of domain addons-470383 in network mk-addons-470383
	I0709 16:36:44.790634   15338 main.go:141] libmachine: (addons-470383) DBG | I0709 16:36:44.790556   15360 retry.go:31] will retry after 427.187432ms: waiting for machine to come up
	I0709 16:36:45.219166   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:36:45.219521   15338 main.go:141] libmachine: (addons-470383) DBG | unable to find current IP address of domain addons-470383 in network mk-addons-470383
	I0709 16:36:45.219547   15338 main.go:141] libmachine: (addons-470383) DBG | I0709 16:36:45.219478   15360 retry.go:31] will retry after 593.284868ms: waiting for machine to come up
	I0709 16:36:45.814159   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:36:45.814602   15338 main.go:141] libmachine: (addons-470383) DBG | unable to find current IP address of domain addons-470383 in network mk-addons-470383
	I0709 16:36:45.814630   15338 main.go:141] libmachine: (addons-470383) DBG | I0709 16:36:45.814555   15360 retry.go:31] will retry after 636.459021ms: waiting for machine to come up
	I0709 16:36:46.452221   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:36:46.452654   15338 main.go:141] libmachine: (addons-470383) DBG | unable to find current IP address of domain addons-470383 in network mk-addons-470383
	I0709 16:36:46.452684   15338 main.go:141] libmachine: (addons-470383) DBG | I0709 16:36:46.452584   15360 retry.go:31] will retry after 687.106266ms: waiting for machine to come up
	I0709 16:36:47.140890   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:36:47.141242   15338 main.go:141] libmachine: (addons-470383) DBG | unable to find current IP address of domain addons-470383 in network mk-addons-470383
	I0709 16:36:47.141273   15338 main.go:141] libmachine: (addons-470383) DBG | I0709 16:36:47.141206   15360 retry.go:31] will retry after 873.259779ms: waiting for machine to come up
	I0709 16:36:48.016074   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:36:48.016477   15338 main.go:141] libmachine: (addons-470383) DBG | unable to find current IP address of domain addons-470383 in network mk-addons-470383
	I0709 16:36:48.016509   15338 main.go:141] libmachine: (addons-470383) DBG | I0709 16:36:48.016413   15360 retry.go:31] will retry after 1.469525434s: waiting for machine to come up
	I0709 16:36:49.487919   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:36:49.488273   15338 main.go:141] libmachine: (addons-470383) DBG | unable to find current IP address of domain addons-470383 in network mk-addons-470383
	I0709 16:36:49.488301   15338 main.go:141] libmachine: (addons-470383) DBG | I0709 16:36:49.488235   15360 retry.go:31] will retry after 1.223922445s: waiting for machine to come up
	I0709 16:36:50.713498   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:36:50.713935   15338 main.go:141] libmachine: (addons-470383) DBG | unable to find current IP address of domain addons-470383 in network mk-addons-470383
	I0709 16:36:50.713963   15338 main.go:141] libmachine: (addons-470383) DBG | I0709 16:36:50.713889   15360 retry.go:31] will retry after 1.938037448s: waiting for machine to come up
	I0709 16:36:52.653777   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:36:52.654092   15338 main.go:141] libmachine: (addons-470383) DBG | unable to find current IP address of domain addons-470383 in network mk-addons-470383
	I0709 16:36:52.654119   15338 main.go:141] libmachine: (addons-470383) DBG | I0709 16:36:52.654041   15360 retry.go:31] will retry after 2.70120406s: waiting for machine to come up
	I0709 16:36:55.358827   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:36:55.359281   15338 main.go:141] libmachine: (addons-470383) DBG | unable to find current IP address of domain addons-470383 in network mk-addons-470383
	I0709 16:36:55.359309   15338 main.go:141] libmachine: (addons-470383) DBG | I0709 16:36:55.359233   15360 retry.go:31] will retry after 3.209105287s: waiting for machine to come up
	I0709 16:36:58.569460   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:36:58.569922   15338 main.go:141] libmachine: (addons-470383) DBG | unable to find current IP address of domain addons-470383 in network mk-addons-470383
	I0709 16:36:58.569950   15338 main.go:141] libmachine: (addons-470383) DBG | I0709 16:36:58.569876   15360 retry.go:31] will retry after 3.572893814s: waiting for machine to come up
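The "will retry after ..." intervals above (276ms, 251ms, 427ms, ... 3.57s) grow roughly exponentially with random jitter. A hypothetical sketch of that kind of schedule, purely illustrative and not minikube's actual `retry.go` implementation:

```python
import random

def backoff_intervals(attempts, base=0.25, factor=1.5, seed=None):
    """Return jittered, exponentially growing wait times in seconds.

    Hypothetical sketch of the schedule shape seen in the
    'will retry after ...' log lines above; not minikube's actual code.
    Each step is the exponential value scaled by a +/-50% jitter so
    concurrent waiters desynchronize.
    """
    rng = random.Random(seed)
    wait = base
    out = []
    for _ in range(attempts):
        out.append(wait * rng.uniform(0.5, 1.5))
        wait *= factor
    return out
```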
	I0709 16:37:02.144579   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:02.145057   15338 main.go:141] libmachine: (addons-470383) Found IP for machine: 192.168.39.216
	I0709 16:37:02.145076   15338 main.go:141] libmachine: (addons-470383) Reserving static IP address...
	I0709 16:37:02.145104   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has current primary IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:02.145420   15338 main.go:141] libmachine: (addons-470383) DBG | unable to find host DHCP lease matching {name: "addons-470383", mac: "52:54:00:21:ff:9e", ip: "192.168.39.216"} in network mk-addons-470383
	I0709 16:37:02.212690   15338 main.go:141] libmachine: (addons-470383) Reserved static IP address: 192.168.39.216
	I0709 16:37:02.212720   15338 main.go:141] libmachine: (addons-470383) DBG | Getting to WaitForSSH function...
	I0709 16:37:02.212740   15338 main.go:141] libmachine: (addons-470383) Waiting for SSH to be available...
	I0709 16:37:02.214965   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:02.215268   15338 main.go:141] libmachine: (addons-470383) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383
	I0709 16:37:02.215313   15338 main.go:141] libmachine: (addons-470383) DBG | unable to find defined IP address of network mk-addons-470383 interface with MAC address 52:54:00:21:ff:9e
	I0709 16:37:02.215404   15338 main.go:141] libmachine: (addons-470383) DBG | Using SSH client type: external
	I0709 16:37:02.215436   15338 main.go:141] libmachine: (addons-470383) DBG | Using SSH private key: /home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa (-rw-------)
	I0709 16:37:02.215472   15338 main.go:141] libmachine: (addons-470383) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0709 16:37:02.215489   15338 main.go:141] libmachine: (addons-470383) DBG | About to run SSH command:
	I0709 16:37:02.215510   15338 main.go:141] libmachine: (addons-470383) DBG | exit 0
	I0709 16:37:02.226820   15338 main.go:141] libmachine: (addons-470383) DBG | SSH cmd err, output: exit status 255: 
	I0709 16:37:02.226848   15338 main.go:141] libmachine: (addons-470383) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0709 16:37:02.226859   15338 main.go:141] libmachine: (addons-470383) DBG | command : exit 0
	I0709 16:37:02.226871   15338 main.go:141] libmachine: (addons-470383) DBG | err     : exit status 255
	I0709 16:37:02.226882   15338 main.go:141] libmachine: (addons-470383) DBG | output  : 
	I0709 16:37:05.228977   15338 main.go:141] libmachine: (addons-470383) DBG | Getting to WaitForSSH function...
	I0709 16:37:05.231202   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:05.231581   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:05.231605   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:05.231688   15338 main.go:141] libmachine: (addons-470383) DBG | Using SSH client type: external
	I0709 16:37:05.231714   15338 main.go:141] libmachine: (addons-470383) DBG | Using SSH private key: /home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa (-rw-------)
	I0709 16:37:05.231776   15338 main.go:141] libmachine: (addons-470383) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0709 16:37:05.231802   15338 main.go:141] libmachine: (addons-470383) DBG | About to run SSH command:
	I0709 16:37:05.231818   15338 main.go:141] libmachine: (addons-470383) DBG | exit 0
	I0709 16:37:05.356333   15338 main.go:141] libmachine: (addons-470383) DBG | SSH cmd err, output: <nil>: 
	I0709 16:37:05.356603   15338 main.go:141] libmachine: (addons-470383) KVM machine creation complete!
	I0709 16:37:05.356941   15338 main.go:141] libmachine: (addons-470383) Calling .GetConfigRaw
	I0709 16:37:05.357492   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:05.357674   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:05.357827   15338 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0709 16:37:05.357837   15338 main.go:141] libmachine: (addons-470383) Calling .GetState
	I0709 16:37:05.359063   15338 main.go:141] libmachine: Detecting operating system of created instance...
	I0709 16:37:05.359078   15338 main.go:141] libmachine: Waiting for SSH to be available...
	I0709 16:37:05.359085   15338 main.go:141] libmachine: Getting to WaitForSSH function...
	I0709 16:37:05.359094   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:05.361161   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:05.361501   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:05.361529   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:05.361682   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:05.361841   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:05.361987   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:05.362091   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:05.362241   15338 main.go:141] libmachine: Using SSH client type: native
	I0709 16:37:05.362446   15338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0709 16:37:05.362457   15338 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0709 16:37:05.463619   15338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 16:37:05.463644   15338 main.go:141] libmachine: Detecting the provisioner...
	I0709 16:37:05.463651   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:05.466635   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:05.467072   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:05.467100   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:05.467247   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:05.467485   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:05.467646   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:05.467794   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:05.467940   15338 main.go:141] libmachine: Using SSH client type: native
	I0709 16:37:05.468187   15338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0709 16:37:05.468202   15338 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0709 16:37:05.569020   15338 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0709 16:37:05.569121   15338 main.go:141] libmachine: found compatible host: buildroot
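Provisioner detection works by reading the `cat /etc/os-release` output shown above, which is simple `KEY=value` lines. A minimal parser sketch (commentary only, not the libmachine code):

```python
def parse_os_release(text):
    """Parse /etc/os-release style KEY=value lines into a dict,
    stripping optional surrounding double quotes from values."""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        info[key] = value.strip('"')
    return info

# The exact output captured in the log above.
OS_RELEASE = """NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
"""
```

Matching `ID=buildroot` is what lets the next log line report "found compatible host: buildroot" and pick the buildroot provisioner.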
	I0709 16:37:05.569140   15338 main.go:141] libmachine: Provisioning with buildroot...
	I0709 16:37:05.569149   15338 main.go:141] libmachine: (addons-470383) Calling .GetMachineName
	I0709 16:37:05.569394   15338 buildroot.go:166] provisioning hostname "addons-470383"
	I0709 16:37:05.569424   15338 main.go:141] libmachine: (addons-470383) Calling .GetMachineName
	I0709 16:37:05.569615   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:05.572210   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:05.572521   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:05.572548   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:05.572620   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:05.572801   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:05.572948   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:05.573131   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:05.573314   15338 main.go:141] libmachine: Using SSH client type: native
	I0709 16:37:05.573518   15338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0709 16:37:05.573530   15338 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-470383 && echo "addons-470383" | sudo tee /etc/hostname
	I0709 16:37:05.690383   15338 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-470383
	
	I0709 16:37:05.690407   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:05.693527   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:05.693949   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:05.693978   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:05.694199   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:05.694386   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:05.694550   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:05.694697   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:05.694834   15338 main.go:141] libmachine: Using SSH client type: native
	I0709 16:37:05.695041   15338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0709 16:37:05.695066   15338 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-470383' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-470383/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-470383' | sudo tee -a /etc/hosts; 
				fi
			fi
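The shell fragment above keeps `/etc/hosts` consistent with the new hostname: if no line already ends in the hostname, it either rewrites an existing `127.0.1.1` entry or appends one. A pure-string sketch of the same logic (illustrative only, not the command minikube runs over SSH):

```python
import re

def ensure_hostname(hosts_text, hostname):
    """Mirror the /etc/hosts logic from the shell snippet above:
    rewrite an existing 127.0.1.1 line to the new hostname, or
    append one, unless the hostname is already present."""
    if re.search(r".*\s" + re.escape(hostname) + r"$", hosts_text, re.M):
        return hosts_text  # hostname already mapped; nothing to do
    if re.search(r"^127\.0\.1\.1\s.*$", hosts_text, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$",
                      "127.0.1.1 " + hostname,
                      hosts_text, flags=re.M)
    return hosts_text.rstrip("\n") + "\n127.0.1.1 " + hostname + "\n"
```

Like the shell version, the function is idempotent: a second call with the same hostname leaves the file unchanged.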
	I0709 16:37:05.805214   15338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 16:37:05.805247   15338 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19199-7540/.minikube CaCertPath:/home/jenkins/minikube-integration/19199-7540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19199-7540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19199-7540/.minikube}
	I0709 16:37:05.805273   15338 buildroot.go:174] setting up certificates
	I0709 16:37:05.805286   15338 provision.go:84] configureAuth start
	I0709 16:37:05.805301   15338 main.go:141] libmachine: (addons-470383) Calling .GetMachineName
	I0709 16:37:05.805571   15338 main.go:141] libmachine: (addons-470383) Calling .GetIP
	I0709 16:37:05.808073   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:05.808384   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:05.808407   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:05.808591   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:05.810390   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:05.810637   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:05.810663   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:05.810766   15338 provision.go:143] copyHostCerts
	I0709 16:37:05.810831   15338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19199-7540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19199-7540/.minikube/key.pem (1675 bytes)
	I0709 16:37:05.810959   15338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19199-7540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19199-7540/.minikube/ca.pem (1082 bytes)
	I0709 16:37:05.811027   15338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19199-7540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19199-7540/.minikube/cert.pem (1123 bytes)
	I0709 16:37:05.811084   15338 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19199-7540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19199-7540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19199-7540/.minikube/certs/ca-key.pem org=jenkins.addons-470383 san=[127.0.0.1 192.168.39.216 addons-470383 localhost minikube]
	I0709 16:37:06.079171   15338 provision.go:177] copyRemoteCerts
	I0709 16:37:06.079230   15338 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 16:37:06.079249   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:06.081648   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:06.081948   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:06.081972   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:06.082090   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:06.082263   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:06.082391   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:06.082565   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:06.162505   15338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19199-7540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 16:37:06.185169   15338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19199-7540/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0709 16:37:06.207105   15338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19199-7540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 16:37:06.229375   15338 provision.go:87] duration metric: took 424.074506ms to configureAuth
	I0709 16:37:06.229398   15338 buildroot.go:189] setting minikube options for container-runtime
	I0709 16:37:06.229549   15338 config.go:182] Loaded profile config "addons-470383": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 16:37:06.229570   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:06.229866   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:06.232242   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:06.232537   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:06.232554   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:06.232703   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:06.232893   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:06.233055   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:06.233198   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:06.233374   15338 main.go:141] libmachine: Using SSH client type: native
	I0709 16:37:06.233531   15338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0709 16:37:06.233541   15338 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 16:37:06.337644   15338 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 16:37:06.337675   15338 buildroot.go:70] root file system type: tmpfs
	I0709 16:37:06.337786   15338 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 16:37:06.337804   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:06.340528   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:06.340820   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:06.340849   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:06.341010   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:06.341208   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:06.341366   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:06.341503   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:06.341663   15338 main.go:141] libmachine: Using SSH client type: native
	I0709 16:37:06.341860   15338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0709 16:37:06.341956   15338 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 16:37:06.457706   15338 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 16:37:06.457739   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:06.460123   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:06.460403   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:06.460426   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:06.460573   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:06.460750   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:06.460912   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:06.461080   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:06.461270   15338 main.go:141] libmachine: Using SSH client type: native
	I0709 16:37:06.461433   15338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0709 16:37:06.461450   15338 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 16:37:08.425116   15338 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0709 16:37:08.425145   15338 main.go:141] libmachine: Checking connection to Docker...
	I0709 16:37:08.425155   15338 main.go:141] libmachine: (addons-470383) Calling .GetURL
	I0709 16:37:08.426302   15338 main.go:141] libmachine: (addons-470383) DBG | Using libvirt version 6000000
	I0709 16:37:08.428583   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:08.428974   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:08.429003   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:08.429196   15338 main.go:141] libmachine: Docker is up and running!
	I0709 16:37:08.429214   15338 main.go:141] libmachine: Reticulating splines...
	I0709 16:37:08.429222   15338 client.go:171] duration metric: took 26.31387911s to LocalClient.Create
	I0709 16:37:08.429246   15338 start.go:167] duration metric: took 26.313939359s to libmachine.API.Create "addons-470383"
	I0709 16:37:08.429258   15338 start.go:293] postStartSetup for "addons-470383" (driver="kvm2")
	I0709 16:37:08.429272   15338 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 16:37:08.429293   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:08.429601   15338 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 16:37:08.429620   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:08.431508   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:08.431789   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:08.431810   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:08.431932   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:08.432098   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:08.432250   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:08.432376   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:08.514281   15338 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 16:37:08.518641   15338 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 16:37:08.518689   15338 filesync.go:126] Scanning /home/jenkins/minikube-integration/19199-7540/.minikube/addons for local assets ...
	I0709 16:37:08.518766   15338 filesync.go:126] Scanning /home/jenkins/minikube-integration/19199-7540/.minikube/files for local assets ...
	I0709 16:37:08.518793   15338 start.go:296] duration metric: took 89.526941ms for postStartSetup
	I0709 16:37:08.518835   15338 main.go:141] libmachine: (addons-470383) Calling .GetConfigRaw
	I0709 16:37:08.519391   15338 main.go:141] libmachine: (addons-470383) Calling .GetIP
	I0709 16:37:08.521742   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:08.522076   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:08.522089   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:08.522308   15338 profile.go:143] Saving config to /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/config.json ...
	I0709 16:37:08.522513   15338 start.go:128] duration metric: took 26.424156052s to createHost
	I0709 16:37:08.522537   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:08.524855   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:08.525165   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:08.525189   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:08.525316   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:08.525483   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:08.525639   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:08.525752   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:08.525885   15338 main.go:141] libmachine: Using SSH client type: native
	I0709 16:37:08.526070   15338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d980] 0x8306e0 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I0709 16:37:08.526081   15338 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0709 16:37:08.628950   15338 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720543028.608991176
	
	I0709 16:37:08.628967   15338 fix.go:216] guest clock: 1720543028.608991176
	I0709 16:37:08.628974   15338 fix.go:229] Guest: 2024-07-09 16:37:08.608991176 +0000 UTC Remote: 2024-07-09 16:37:08.522526533 +0000 UTC m=+26.522701387 (delta=86.464643ms)
	I0709 16:37:08.629016   15338 fix.go:200] guest clock delta is within tolerance: 86.464643ms
	I0709 16:37:08.629028   15338 start.go:83] releasing machines lock for "addons-470383", held for 26.530743104s
	I0709 16:37:08.629050   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:08.629282   15338 main.go:141] libmachine: (addons-470383) Calling .GetIP
	I0709 16:37:08.631749   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:08.632075   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:08.632100   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:08.632286   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:08.632773   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:08.632945   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:08.633018   15338 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 16:37:08.633062   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:08.633191   15338 ssh_runner.go:195] Run: cat /version.json
	I0709 16:37:08.633211   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:08.635684   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:08.635955   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:08.635986   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:08.636009   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:08.636156   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:08.636377   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:08.636408   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:08.636435   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:08.636504   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:08.636573   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:08.636647   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:08.636723   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:08.636825   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:08.636934   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:08.735946   15338 ssh_runner.go:195] Run: systemctl --version
	I0709 16:37:08.741854   15338 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0709 16:37:08.747338   15338 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 16:37:08.747398   15338 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 16:37:08.762635   15338 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 16:37:08.762656   15338 start.go:494] detecting cgroup driver to use...
	I0709 16:37:08.762749   15338 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 16:37:08.780723   15338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 16:37:08.790946   15338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 16:37:08.800876   15338 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 16:37:08.800928   15338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 16:37:08.810976   15338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 16:37:08.820992   15338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 16:37:08.831028   15338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 16:37:08.840977   15338 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 16:37:08.851261   15338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 16:37:08.861692   15338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 16:37:08.871809   15338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 16:37:08.881897   15338 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 16:37:08.890821   15338 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 16:37:08.899700   15338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 16:37:09.017303   15338 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0709 16:37:09.040836   15338 start.go:494] detecting cgroup driver to use...
	I0709 16:37:09.040927   15338 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 16:37:09.055688   15338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 16:37:09.069087   15338 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 16:37:09.088002   15338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 16:37:09.101236   15338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 16:37:09.114106   15338 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 16:37:09.142294   15338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 16:37:09.156037   15338 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 16:37:09.174119   15338 ssh_runner.go:195] Run: which cri-dockerd
	I0709 16:37:09.177881   15338 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 16:37:09.187224   15338 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 16:37:09.203497   15338 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 16:37:09.315362   15338 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 16:37:09.427171   15338 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 16:37:09.427312   15338 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 16:37:09.444235   15338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 16:37:09.558374   15338 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 16:37:11.917757   15338 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.359346341s)
	I0709 16:37:11.917817   15338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0709 16:37:11.932571   15338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 16:37:11.946208   15338 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0709 16:37:12.072084   15338 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0709 16:37:12.197911   15338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 16:37:12.321427   15338 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0709 16:37:12.339638   15338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 16:37:12.352591   15338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 16:37:12.469482   15338 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0709 16:37:12.546682   15338 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0709 16:37:12.546759   15338 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0709 16:37:12.552382   15338 start.go:562] Will wait 60s for crictl version
	I0709 16:37:12.552449   15338 ssh_runner.go:195] Run: which crictl
	I0709 16:37:12.556292   15338 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0709 16:37:12.596326   15338 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0709 16:37:12.596403   15338 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 16:37:12.623078   15338 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 16:37:12.645618   15338 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0709 16:37:12.645661   15338 main.go:141] libmachine: (addons-470383) Calling .GetIP
	I0709 16:37:12.648189   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:12.648523   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:12.648543   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:12.648838   15338 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0709 16:37:12.652935   15338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 16:37:12.666776   15338 kubeadm.go:877] updating cluster {Name:addons-470383 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:addons-470383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0709 16:37:12.666876   15338 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 16:37:12.666918   15338 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 16:37:12.683263   15338 docker.go:685] Got preloaded images: 
	I0709 16:37:12.683290   15338 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0709 16:37:12.683343   15338 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 16:37:12.693060   15338 ssh_runner.go:195] Run: which lz4
	I0709 16:37:12.696693   15338 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0709 16:37:12.700606   15338 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0709 16:37:12.700630   15338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19199-7540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0709 16:37:13.920327   15338 docker.go:649] duration metric: took 1.223652444s to copy over tarball
	I0709 16:37:13.920406   15338 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0709 16:37:15.898601   15338 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.978162281s)
	I0709 16:37:15.898628   15338 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0709 16:37:15.933415   15338 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 16:37:15.943812   15338 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0709 16:37:15.960429   15338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 16:37:16.072516   15338 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 16:37:19.510699   15338 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.438149023s)
	I0709 16:37:19.510769   15338 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 16:37:19.528520   15338 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0709 16:37:19.528538   15338 cache_images.go:84] Images are preloaded, skipping loading
	I0709 16:37:19.528562   15338 kubeadm.go:928] updating node { 192.168.39.216 8443 v1.30.2 docker true true} ...
	I0709 16:37:19.528658   15338 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-470383 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-470383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0709 16:37:19.528715   15338 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0709 16:37:19.556180   15338 cni.go:84] Creating CNI manager for ""
	I0709 16:37:19.556203   15338 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0709 16:37:19.556213   15338 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0709 16:37:19.556229   15338 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.216 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-470383 NodeName:addons-470383 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0709 16:37:19.556344   15338 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.216
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-470383"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.216
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.216"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0709 16:37:19.556414   15338 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0709 16:37:19.565920   15338 binaries.go:44] Found k8s binaries, skipping transfer
	I0709 16:37:19.565983   15338 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0709 16:37:19.574867   15338 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0709 16:37:19.590873   15338 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0709 16:37:19.606755   15338 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0709 16:37:19.622925   15338 ssh_runner.go:195] Run: grep 192.168.39.216	control-plane.minikube.internal$ /etc/hosts
	I0709 16:37:19.626605   15338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 16:37:19.638304   15338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 16:37:19.747232   15338 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 16:37:19.770916   15338 certs.go:68] Setting up /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383 for IP: 192.168.39.216
	I0709 16:37:19.770942   15338 certs.go:194] generating shared ca certs ...
	I0709 16:37:19.770957   15338 certs.go:226] acquiring lock for ca certs: {Name:mkf1ae1546eaa99c55fffbf1a6cd3b47d34d1a62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 16:37:19.771097   15338 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19199-7540/.minikube/ca.key
	I0709 16:37:20.015494   15338 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19199-7540/.minikube/ca.crt ...
	I0709 16:37:20.015518   15338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19199-7540/.minikube/ca.crt: {Name:mk03c3e11dcf025172a111dab7016cb1e522aeee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 16:37:20.015701   15338 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19199-7540/.minikube/ca.key ...
	I0709 16:37:20.015715   15338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19199-7540/.minikube/ca.key: {Name:mk4e1a862a25408a6ece9fd76e83b101d31dbb89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 16:37:20.015812   15338 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19199-7540/.minikube/proxy-client-ca.key
	I0709 16:37:20.211957   15338 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19199-7540/.minikube/proxy-client-ca.crt ...
	I0709 16:37:20.211985   15338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19199-7540/.minikube/proxy-client-ca.crt: {Name:mk17deee2d8354c084f425dc5520deb667b12efe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 16:37:20.212179   15338 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19199-7540/.minikube/proxy-client-ca.key ...
	I0709 16:37:20.212193   15338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19199-7540/.minikube/proxy-client-ca.key: {Name:mkcd4b5ef84e023e098fba040770bde24784a60f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 16:37:20.212286   15338 certs.go:256] generating profile certs ...
	I0709 16:37:20.212338   15338 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.key
	I0709 16:37:20.212360   15338 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt with IP's: []
	I0709 16:37:20.386067   15338 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt ...
	I0709 16:37:20.386095   15338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: {Name:mk81b646b194a3cc4db78886c89966879152c29c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 16:37:20.386298   15338 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.key ...
	I0709 16:37:20.386313   15338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.key: {Name:mkf1aca3496809a500298b9c7c1d9e41d6e93e2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 16:37:20.386421   15338 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/apiserver.key.fa7d36a1
	I0709 16:37:20.386441   15338 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/apiserver.crt.fa7d36a1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.216]
	I0709 16:37:20.614694   15338 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/apiserver.crt.fa7d36a1 ...
	I0709 16:37:20.614729   15338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/apiserver.crt.fa7d36a1: {Name:mk35c36cbe90824bd4d9f4f163264462ce541f2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 16:37:20.614935   15338 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/apiserver.key.fa7d36a1 ...
	I0709 16:37:20.614954   15338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/apiserver.key.fa7d36a1: {Name:mk1b71c76b27cb42667c55c46bedca34a73cf386 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 16:37:20.615063   15338 certs.go:381] copying /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/apiserver.crt.fa7d36a1 -> /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/apiserver.crt
	I0709 16:37:20.615145   15338 certs.go:385] copying /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/apiserver.key.fa7d36a1 -> /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/apiserver.key
	I0709 16:37:20.615200   15338 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/proxy-client.key
	I0709 16:37:20.615219   15338 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/proxy-client.crt with IP's: []
	I0709 16:37:20.785100   15338 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/proxy-client.crt ...
	I0709 16:37:20.785131   15338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/proxy-client.crt: {Name:mkc3f2b65d359bbbb4cf61db6a0edc25c0300545 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 16:37:20.785318   15338 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/proxy-client.key ...
	I0709 16:37:20.785333   15338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/proxy-client.key: {Name:mk43c834065ee0315808600804c69bdacfc78957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 16:37:20.785567   15338 certs.go:484] found cert: /home/jenkins/minikube-integration/19199-7540/.minikube/certs/ca-key.pem (1675 bytes)
	I0709 16:37:20.785605   15338 certs.go:484] found cert: /home/jenkins/minikube-integration/19199-7540/.minikube/certs/ca.pem (1082 bytes)
	I0709 16:37:20.785631   15338 certs.go:484] found cert: /home/jenkins/minikube-integration/19199-7540/.minikube/certs/cert.pem (1123 bytes)
	I0709 16:37:20.785664   15338 certs.go:484] found cert: /home/jenkins/minikube-integration/19199-7540/.minikube/certs/key.pem (1675 bytes)
	I0709 16:37:20.786223   15338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19199-7540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0709 16:37:20.811649   15338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19199-7540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0709 16:37:20.835264   15338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19199-7540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0709 16:37:20.858944   15338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19199-7540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0709 16:37:20.883202   15338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0709 16:37:20.906568   15338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0709 16:37:20.930476   15338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0709 16:37:20.953633   15338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0709 16:37:20.977146   15338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19199-7540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0709 16:37:21.000072   15338 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0709 16:37:21.015849   15338 ssh_runner.go:195] Run: openssl version
	I0709 16:37:21.021430   15338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0709 16:37:21.032082   15338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0709 16:37:21.036417   15338 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  9 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0709 16:37:21.036486   15338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0709 16:37:21.042125   15338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0709 16:37:21.052528   15338 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0709 16:37:21.056614   15338 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 16:37:21.056656   15338 kubeadm.go:391] StartCluster: {Name:addons-470383 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:addons-470383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 16:37:21.056762   15338 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0709 16:37:21.073811   15338 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0709 16:37:21.083300   15338 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0709 16:37:21.092554   15338 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0709 16:37:21.101873   15338 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 16:37:21.101890   15338 kubeadm.go:156] found existing configuration files:
	
	I0709 16:37:21.101932   15338 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0709 16:37:21.110708   15338 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0709 16:37:21.110766   15338 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0709 16:37:21.119914   15338 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0709 16:37:21.128689   15338 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0709 16:37:21.128741   15338 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0709 16:37:21.137671   15338 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0709 16:37:21.146366   15338 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0709 16:37:21.146408   15338 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0709 16:37:21.155974   15338 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0709 16:37:21.164734   15338 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0709 16:37:21.164783   15338 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0709 16:37:21.173605   15338 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0709 16:37:21.227842   15338 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0709 16:37:21.227911   15338 kubeadm.go:309] [preflight] Running pre-flight checks
	I0709 16:37:21.364157   15338 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0709 16:37:21.364282   15338 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0709 16:37:21.364418   15338 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0709 16:37:21.595251   15338 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 16:37:21.597781   15338 out.go:204]   - Generating certificates and keys ...
	I0709 16:37:21.597880   15338 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0709 16:37:21.597992   15338 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0709 16:37:21.744321   15338 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0709 16:37:21.808502   15338 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0709 16:37:22.001893   15338 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0709 16:37:22.207019   15338 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0709 16:37:22.435295   15338 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0709 16:37:22.435447   15338 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-470383 localhost] and IPs [192.168.39.216 127.0.0.1 ::1]
	I0709 16:37:22.512253   15338 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0709 16:37:22.512405   15338 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-470383 localhost] and IPs [192.168.39.216 127.0.0.1 ::1]
	I0709 16:37:22.575299   15338 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0709 16:37:22.691374   15338 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0709 16:37:22.858605   15338 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0709 16:37:22.858713   15338 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 16:37:23.097426   15338 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0709 16:37:23.152852   15338 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0709 16:37:23.365706   15338 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0709 16:37:23.505561   15338 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 16:37:23.611531   15338 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 16:37:23.612103   15338 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 16:37:23.614522   15338 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 16:37:23.616156   15338 out.go:204]   - Booting up control plane ...
	I0709 16:37:23.616266   15338 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 16:37:23.616372   15338 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 16:37:23.616467   15338 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 16:37:23.644946   15338 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 16:37:23.647469   15338 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 16:37:23.647544   15338 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0709 16:37:23.788952   15338 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0709 16:37:23.789069   15338 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0709 16:37:24.291311   15338 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.405062ms
	I0709 16:37:24.291404   15338 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0709 16:37:29.793378   15338 kubeadm.go:309] [api-check] The API server is healthy after 5.502013091s
	I0709 16:37:29.803717   15338 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0709 16:37:29.817394   15338 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0709 16:37:29.838543   15338 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0709 16:37:29.838746   15338 kubeadm.go:309] [mark-control-plane] Marking the node addons-470383 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0709 16:37:29.848779   15338 kubeadm.go:309] [bootstrap-token] Using token: 8dw8s0.9u6iko4ab29vvnav
	I0709 16:37:29.850310   15338 out.go:204]   - Configuring RBAC rules ...
	I0709 16:37:29.850437   15338 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0709 16:37:29.857589   15338 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0709 16:37:29.863420   15338 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0709 16:37:29.866519   15338 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0709 16:37:29.869500   15338 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0709 16:37:29.873602   15338 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0709 16:37:30.199865   15338 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0709 16:37:30.638618   15338 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0709 16:37:31.197379   15338 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0709 16:37:31.198406   15338 kubeadm.go:309] 
	I0709 16:37:31.198498   15338 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0709 16:37:31.198508   15338 kubeadm.go:309] 
	I0709 16:37:31.198600   15338 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0709 16:37:31.198616   15338 kubeadm.go:309] 
	I0709 16:37:31.198657   15338 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0709 16:37:31.198732   15338 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0709 16:37:31.198800   15338 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0709 16:37:31.198808   15338 kubeadm.go:309] 
	I0709 16:37:31.198879   15338 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0709 16:37:31.198888   15338 kubeadm.go:309] 
	I0709 16:37:31.198953   15338 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0709 16:37:31.198980   15338 kubeadm.go:309] 
	I0709 16:37:31.199062   15338 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0709 16:37:31.199169   15338 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0709 16:37:31.199248   15338 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0709 16:37:31.199259   15338 kubeadm.go:309] 
	I0709 16:37:31.199325   15338 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0709 16:37:31.199427   15338 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0709 16:37:31.199445   15338 kubeadm.go:309] 
	I0709 16:37:31.199565   15338 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 8dw8s0.9u6iko4ab29vvnav \
	I0709 16:37:31.199733   15338 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ffb7650134648a1491df5fc1c162614261c99aa89c47f45aafbe36363bd61f5c \
	I0709 16:37:31.199768   15338 kubeadm.go:309] 	--control-plane 
	I0709 16:37:31.199780   15338 kubeadm.go:309] 
	I0709 16:37:31.199890   15338 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0709 16:37:31.199899   15338 kubeadm.go:309] 
	I0709 16:37:31.199971   15338 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 8dw8s0.9u6iko4ab29vvnav \
	I0709 16:37:31.200057   15338 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ffb7650134648a1491df5fc1c162614261c99aa89c47f45aafbe36363bd61f5c 
	I0709 16:37:31.201161   15338 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0709 16:37:31.201193   15338 cni.go:84] Creating CNI manager for ""
	I0709 16:37:31.201210   15338 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0709 16:37:31.202919   15338 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0709 16:37:31.204397   15338 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0709 16:37:31.214736   15338 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0709 16:37:31.231484   15338 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0709 16:37:31.231550   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:31.231574   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-470383 minikube.k8s.io/updated_at=2024_07_09T16_37_31_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8 minikube.k8s.io/name=addons-470383 minikube.k8s.io/primary=true
	I0709 16:37:31.246281   15338 ops.go:34] apiserver oom_adj: -16
	I0709 16:37:31.373499   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:31.873549   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:32.374485   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:32.874177   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:33.373704   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:33.874195   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:34.374182   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:34.874535   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:35.374195   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:35.874469   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:36.373668   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:36.873933   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:37.374449   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:37.873681   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:38.374608   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:38.874299   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:39.373518   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:39.873949   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:40.374538   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:40.874148   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:41.373850   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:41.874385   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:42.374164   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:42.873590   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:43.374315   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:43.874377   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:44.373916   15338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 16:37:44.466620   15338 kubeadm.go:1107] duration metric: took 13.235123316s to wait for elevateKubeSystemPrivileges
	W0709 16:37:44.466655   15338 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0709 16:37:44.466666   15338 kubeadm.go:393] duration metric: took 23.41001267s to StartCluster
	I0709 16:37:44.466685   15338 settings.go:142] acquiring lock: {Name:mkb5123b0721152e93860d013088a68e4724b3a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 16:37:44.466835   15338 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19199-7540/kubeconfig
	I0709 16:37:44.467292   15338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19199-7540/kubeconfig: {Name:mk9faa97dfe13af3c5855575994d9ee50e935eb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 16:37:44.467504   15338 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0709 16:37:44.467568   15338 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 16:37:44.467602   15338 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0709 16:37:44.467703   15338 addons.go:69] Setting cloud-spanner=true in profile "addons-470383"
	I0709 16:37:44.467739   15338 addons.go:69] Setting yakd=true in profile "addons-470383"
	I0709 16:37:44.467758   15338 addons.go:69] Setting inspektor-gadget=true in profile "addons-470383"
	I0709 16:37:44.467768   15338 addons.go:234] Setting addon yakd=true in "addons-470383"
	I0709 16:37:44.467778   15338 addons.go:234] Setting addon inspektor-gadget=true in "addons-470383"
	I0709 16:37:44.467781   15338 config.go:182] Loaded profile config "addons-470383": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 16:37:44.467791   15338 addons.go:69] Setting storage-provisioner=true in profile "addons-470383"
	I0709 16:37:44.467810   15338 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-470383"
	I0709 16:37:44.467821   15338 addons.go:69] Setting ingress=true in profile "addons-470383"
	I0709 16:37:44.467826   15338 addons.go:234] Setting addon storage-provisioner=true in "addons-470383"
	I0709 16:37:44.467827   15338 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-470383"
	I0709 16:37:44.467838   15338 addons.go:69] Setting default-storageclass=true in profile "addons-470383"
	I0709 16:37:44.467848   15338 addons.go:69] Setting volumesnapshots=true in profile "addons-470383"
	I0709 16:37:44.467850   15338 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-470383"
	I0709 16:37:44.467849   15338 addons.go:69] Setting volcano=true in profile "addons-470383"
	I0709 16:37:44.467857   15338 host.go:66] Checking if "addons-470383" exists ...
	I0709 16:37:44.467864   15338 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-470383"
	I0709 16:37:44.467865   15338 addons.go:234] Setting addon volumesnapshots=true in "addons-470383"
	I0709 16:37:44.467874   15338 addons.go:69] Setting metrics-server=true in profile "addons-470383"
	I0709 16:37:44.467875   15338 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-470383"
	I0709 16:37:44.467881   15338 host.go:66] Checking if "addons-470383" exists ...
	I0709 16:37:44.467888   15338 host.go:66] Checking if "addons-470383" exists ...
	I0709 16:37:44.467890   15338 addons.go:234] Setting addon metrics-server=true in "addons-470383"
	I0709 16:37:44.467890   15338 addons.go:234] Setting addon volcano=true in "addons-470383"
	I0709 16:37:44.467908   15338 host.go:66] Checking if "addons-470383" exists ...
	I0709 16:37:44.467917   15338 host.go:66] Checking if "addons-470383" exists ...
	I0709 16:37:44.467820   15338 addons.go:69] Setting gcp-auth=true in profile "addons-470383"
	I0709 16:37:44.468334   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.468343   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.467839   15338 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-470383"
	I0709 16:37:44.468354   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.467804   15338 host.go:66] Checking if "addons-470383" exists ...
	I0709 16:37:44.468373   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.468380   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.468395   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.467800   15338 host.go:66] Checking if "addons-470383" exists ...
	I0709 16:37:44.468413   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.467745   15338 addons.go:234] Setting addon cloud-spanner=true in "addons-470383"
	I0709 16:37:44.467812   15338 addons.go:69] Setting helm-tiller=true in profile "addons-470383"
	I0709 16:37:44.468478   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.468483   15338 addons.go:234] Setting addon helm-tiller=true in "addons-470383"
	I0709 16:37:44.468384   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.468498   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.467867   15338 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-470383"
	I0709 16:37:44.468348   15338 mustload.go:65] Loading cluster: addons-470383
	I0709 16:37:44.467840   15338 addons.go:234] Setting addon ingress=true in "addons-470383"
	I0709 16:37:44.467820   15338 addons.go:69] Setting ingress-dns=true in profile "addons-470383"
	I0709 16:37:44.468535   15338 addons.go:69] Setting registry=true in profile "addons-470383"
	I0709 16:37:44.468558   15338 addons.go:234] Setting addon ingress-dns=true in "addons-470383"
	I0709 16:37:44.468646   15338 addons.go:234] Setting addon registry=true in "addons-470383"
	I0709 16:37:44.468686   15338 host.go:66] Checking if "addons-470383" exists ...
	I0709 16:37:44.468725   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.468744   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.468757   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.468772   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.468394   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.468855   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.468874   15338 host.go:66] Checking if "addons-470383" exists ...
	I0709 16:37:44.469027   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.469059   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.469109   15338 host.go:66] Checking if "addons-470383" exists ...
	I0709 16:37:44.469197   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.469233   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.469293   15338 host.go:66] Checking if "addons-470383" exists ...
	I0709 16:37:44.469278   15338 host.go:66] Checking if "addons-470383" exists ...
	I0709 16:37:44.469427   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.469460   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.469638   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.469669   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.469760   15338 host.go:66] Checking if "addons-470383" exists ...
	I0709 16:37:44.469832   15338 out.go:177] * Verifying Kubernetes components...
	I0709 16:37:44.470032   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.470051   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.470111   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.470125   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.477406   15338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 16:37:44.488994   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45273
	I0709 16:37:44.490075   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.490445   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45771
	I0709 16:37:44.490557   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.490574   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.490947   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.490949   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.490992   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37285
	I0709 16:37:44.491480   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.491497   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.491562   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.491571   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.491589   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.492301   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.492606   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.492623   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.492639   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41539
	I0709 16:37:44.492922   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.504427   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40945
	I0709 16:37:44.504547   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44189
	I0709 16:37:44.504613   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.504624   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35453
	I0709 16:37:44.504633   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.504909   15338 config.go:182] Loaded profile config "addons-470383": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 16:37:44.505274   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.505293   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.504614   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.505353   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.505631   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.505651   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.516312   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.516326   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.516408   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.516422   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.516925   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.516942   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.517065   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.517075   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.517171   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.517180   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.517282   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.517292   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.517426   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.517761   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.517815   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.517855   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.517895   15338 main.go:141] libmachine: (addons-470383) Calling .GetState
	I0709 16:37:44.518138   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.518156   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.518310   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.518341   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.518519   15338 main.go:141] libmachine: (addons-470383) Calling .GetState
	I0709 16:37:44.522622   15338 addons.go:234] Setting addon default-storageclass=true in "addons-470383"
	I0709 16:37:44.522664   15338 host.go:66] Checking if "addons-470383" exists ...
	I0709 16:37:44.523017   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.523049   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.523650   15338 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-470383"
	I0709 16:37:44.523696   15338 host.go:66] Checking if "addons-470383" exists ...
	I0709 16:37:44.524047   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.524097   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.539251   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38237
	I0709 16:37:44.539774   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.540413   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.540431   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.541410   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.542003   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.542037   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.545037   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38959
	I0709 16:37:44.545824   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.546251   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.546285   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.546602   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.547100   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.547137   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.550113   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37493
	I0709 16:37:44.550596   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.551076   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.551092   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.551436   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.551612   15338 main.go:141] libmachine: (addons-470383) Calling .GetState
	I0709 16:37:44.553268   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:44.553336   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41867
	I0709 16:37:44.555386   15338 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0709 16:37:44.555664   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40531
	I0709 16:37:44.556056   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.556532   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.556554   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.556709   15338 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0709 16:37:44.556732   15338 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0709 16:37:44.556751   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:44.556854   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.557368   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42193
	I0709 16:37:44.557393   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.557427   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.558175   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.561581   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.561668   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43041
	I0709 16:37:44.561774   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:44.561901   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.561926   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:44.561949   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.561961   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
	I0709 16:37:44.562041   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39129
	I0709 16:37:44.562259   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34831
	I0709 16:37:44.562514   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.562527   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.562855   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36173
	I0709 16:37:44.562954   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.563500   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.563514   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.563882   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.564320   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:44.564576   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.564804   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.565305   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.565384   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:44.565450   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.565510   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.565523   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.565601   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:44.565785   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.566208   15338 main.go:141] libmachine: (addons-470383) Calling .GetState
	I0709 16:37:44.566396   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.566415   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.566485   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.566536   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.566551   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.566611   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38085
	I0709 16:37:44.567252   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.567263   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.567252   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.567892   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.567925   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.567931   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.567926   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.567961   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.567967   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.568361   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.568379   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.568386   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37201
	I0709 16:37:44.568652   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.568883   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.569143   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.569158   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.569472   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.569544   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.570022   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.570046   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.570161   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:44.570198   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.570209   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.570399   15338 main.go:141] libmachine: (addons-470383) Calling .GetState
	I0709 16:37:44.570988   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.570963   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.571669   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.571707   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.572011   15338 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0709 16:37:44.572264   15338 host.go:66] Checking if "addons-470383" exists ...
	I0709 16:37:44.572566   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.572594   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.572708   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.572758   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.573391   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.573592   15338 main.go:141] libmachine: (addons-470383) Calling .GetState
	I0709 16:37:44.574853   15338 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0709 16:37:44.575141   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:44.608666   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44155
	I0709 16:37:44.608701   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36369
	I0709 16:37:44.608748   15338 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0709 16:37:44.608800   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44741
	I0709 16:37:44.608847   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34185
	I0709 16:37:44.608897   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43559
	I0709 16:37:44.608933   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39997
	I0709 16:37:44.609221   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44627
	I0709 16:37:44.609233   15338 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 16:37:44.609319   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33219
	I0709 16:37:44.609327   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43455
	I0709 16:37:44.609393   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46289
	I0709 16:37:44.609394   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I0709 16:37:44.609512   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.610227   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.610244   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.610312   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.610323   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.610388   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.610421   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.610435   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.610479   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.610489   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.610577   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.611038   15338 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 16:37:44.611042   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.611055   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.611057   15338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0709 16:37:44.611071   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.611082   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:44.611087   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.611055   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.611132   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.611147   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.611151   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.611165   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.611062   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.611375   15338 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0709 16:37:44.611395   15338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0709 16:37:44.611412   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:44.611441   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.611453   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.611479   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.611516   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.611585   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.611599   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.611611   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.611818   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.611910   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.611918   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.611956   15338 main.go:141] libmachine: (addons-470383) Calling .GetState
	I0709 16:37:44.612166   15338 main.go:141] libmachine: (addons-470383) Calling .GetState
	I0709 16:37:44.612220   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.612257   15338 main.go:141] libmachine: (addons-470383) Calling .GetState
	I0709 16:37:44.612279   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:44.612299   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.612319   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:44.612969   15338 main.go:141] libmachine: (addons-470383) Calling .GetState
	I0709 16:37:44.612969   15338 main.go:141] libmachine: (addons-470383) Calling .GetState
	I0709 16:37:44.613088   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.613100   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.613109   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.613119   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.613153   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.613219   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.613229   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.614058   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.614070   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.614105   15338 main.go:141] libmachine: (addons-470383) Calling .GetState
	I0709 16:37:44.614139   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.614892   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.614920   15338 main.go:141] libmachine: (addons-470383) Calling .GetState
	I0709 16:37:44.614927   15338 main.go:141] libmachine: (addons-470383) Calling .GetState
	I0709 16:37:44.614950   15338 main.go:141] libmachine: (addons-470383) Calling .GetState
	I0709 16:37:44.615526   15338 main.go:141] libmachine: (addons-470383) Calling .GetState
	I0709 16:37:44.615709   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:44.617221   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:44.617427   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:44.617822   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:44.618195   15338 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0709 16:37:44.618250   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:44.618370   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:44.618503   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:44.618971   15338 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0709 16:37:44.618998   15338 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0709 16:37:44.619774   15338 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0709 16:37:44.620029   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:44.620033   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:44.620485   15338 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0709 16:37:44.620497   15338 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0709 16:37:44.620762   15338 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0709 16:37:44.620782   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:44.620516   15338 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0709 16:37:44.620824   15338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0709 16:37:44.620834   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:44.621284   15338 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.1
	I0709 16:37:44.621889   15338 out.go:177]   - Using image docker.io/registry:2.8.3
	I0709 16:37:44.621939   15338 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0709 16:37:44.621959   15338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0709 16:37:44.621980   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:44.622035   15338 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0709 16:37:44.622269   15338 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0709 16:37:44.623299   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:44.623457   15338 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0709 16:37:44.623504   15338 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0709 16:37:44.623707   15338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0709 16:37:44.623730   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:44.624021   15338 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0709 16:37:44.624033   15338 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0709 16:37:44.624082   15338 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0709 16:37:44.625202   15338 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0709 16:37:44.625220   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:44.624959   15338 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0709 16:37:44.625542   15338 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0709 16:37:44.626180   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.626206   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:44.626225   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.626257   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:44.625636   15338 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0709 16:37:44.626310   15338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0709 16:37:44.626324   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:44.625962   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.625983   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.626517   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:44.626976   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:44.627035   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:44.627049   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.627052   15338 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0709 16:37:44.627063   15338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0709 16:37:44.627074   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:44.627079   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:44.627086   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.627107   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:44.627606   15338 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0709 16:37:44.627623   15338 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0709 16:37:44.627964   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:44.627734   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:44.628117   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:44.628155   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.628215   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:44.628252   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.628384   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:44.628398   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:44.628424   15338 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0709 16:37:44.628495   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:44.628850   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:44.628909   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:44.628728   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.628582   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:44.629287   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:44.629878   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:44.629887   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:44.629904   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.630055   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:44.630188   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.630223   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:44.630472   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:44.630554   15338 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0709 16:37:44.630643   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:44.630658   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.630691   15338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0709 16:37:44.630698   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:44.630689   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:44.630732   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:44.630922   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:44.630968   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:44.631371   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:44.631533   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:44.631665   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:44.631899   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.632245   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:44.632268   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.632435   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:44.632611   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:44.632769   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:44.632938   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:44.633414   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.634068   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:44.634091   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.634208   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:44.634362   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:44.634512   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:44.634633   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:44.635993   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.636268   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.636312   15338 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0709 16:37:44.636717   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:44.636742   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.637010   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:44.637031   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.637268   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:44.637327   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:44.637462   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:44.637505   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:44.637535   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.637631   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:44.637616   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:44.637778   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:44.637986   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:44.638217   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:44.638224   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:44.638240   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.638383   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:44.638548   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:44.638682   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:44.638974   15338 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0709 16:37:44.640492   15338 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0709 16:37:44.641704   15338 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0709 16:37:44.642347   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37357
	I0709 16:37:44.643146   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.643653   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.643668   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	W0709 16:37:44.643852   15338 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44136->192.168.39.216:22: read: connection reset by peer
	I0709 16:37:44.643877   15338 retry.go:31] will retry after 350.7198ms: ssh: handshake failed: read tcp 192.168.39.1:44136->192.168.39.216:22: read: connection reset by peer
	I0709 16:37:44.643940   15338 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0709 16:37:44.643972   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.644055   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32797
	I0709 16:37:44.644109   15338 main.go:141] libmachine: (addons-470383) Calling .GetState
	I0709 16:37:44.644428   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.644902   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.644923   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.645143   15338 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0709 16:37:44.645201   15338 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0709 16:37:44.645208   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.645249   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:44.645379   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:44.645800   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:44.647265   15338 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0709 16:37:44.648378   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.648837   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:44.648856   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.649006   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:44.649136   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:44.649328   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:44.649452   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:44.649648   15338 out.go:177]   - Using image docker.io/busybox:stable
	I0709 16:37:44.650034   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33329
	I0709 16:37:44.650332   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:44.650728   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:44.650747   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:44.650976   15338 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0709 16:37:44.650989   15338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0709 16:37:44.651001   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:44.651004   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:44.651227   15338 main.go:141] libmachine: (addons-470383) Calling .GetState
	I0709 16:37:44.652931   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:44.653188   15338 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0709 16:37:44.653203   15338 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0709 16:37:44.653217   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:44.654453   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.655050   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:44.655069   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.655281   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:44.655434   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:44.655573   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:44.655647   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:44.656023   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.656430   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:44.656444   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:44.656593   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:44.656718   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:44.656850   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:44.656959   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:44.922054   15338 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 16:37:44.922384   15338 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0709 16:37:44.975591   15338 node_ready.go:35] waiting up to 6m0s for node "addons-470383" to be "Ready" ...
	I0709 16:37:44.978105   15338 node_ready.go:49] node "addons-470383" has status "Ready":"True"
	I0709 16:37:44.978121   15338 node_ready.go:38] duration metric: took 2.505795ms for node "addons-470383" to be "Ready" ...
	I0709 16:37:44.978128   15338 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 16:37:44.984843   15338 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-470383" in "kube-system" namespace to be "Ready" ...
	I0709 16:37:44.989501   15338 pod_ready.go:92] pod "etcd-addons-470383" in "kube-system" namespace has status "Ready":"True"
	I0709 16:37:44.989519   15338 pod_ready.go:81] duration metric: took 4.645296ms for pod "etcd-addons-470383" in "kube-system" namespace to be "Ready" ...
	I0709 16:37:44.989527   15338 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-470383" in "kube-system" namespace to be "Ready" ...
	I0709 16:37:44.993891   15338 pod_ready.go:92] pod "kube-apiserver-addons-470383" in "kube-system" namespace has status "Ready":"True"
	I0709 16:37:44.993908   15338 pod_ready.go:81] duration metric: took 4.373992ms for pod "kube-apiserver-addons-470383" in "kube-system" namespace to be "Ready" ...
	I0709 16:37:44.993917   15338 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-470383" in "kube-system" namespace to be "Ready" ...
	I0709 16:37:45.002739   15338 pod_ready.go:92] pod "kube-controller-manager-addons-470383" in "kube-system" namespace has status "Ready":"True"
	I0709 16:37:45.002757   15338 pod_ready.go:81] duration metric: took 8.832163ms for pod "kube-controller-manager-addons-470383" in "kube-system" namespace to be "Ready" ...
	I0709 16:37:45.002764   15338 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-59z6c" in "kube-system" namespace to be "Ready" ...
	I0709 16:37:45.034469   15338 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0709 16:37:45.034500   15338 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0709 16:37:45.034760   15338 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0709 16:37:45.034778   15338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0709 16:37:45.065335   15338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0709 16:37:45.139180   15338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0709 16:37:45.194141   15338 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0709 16:37:45.194164   15338 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0709 16:37:45.195102   15338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0709 16:37:45.199792   15338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 16:37:45.226500   15338 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0709 16:37:45.226522   15338 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0709 16:37:45.239995   15338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0709 16:37:45.240167   15338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0709 16:37:45.242131   15338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0709 16:37:45.243482   15338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0709 16:37:45.249781   15338 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0709 16:37:45.249797   15338 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0709 16:37:45.275662   15338 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0709 16:37:45.275685   15338 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0709 16:37:45.277193   15338 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0709 16:37:45.277211   15338 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0709 16:37:45.280536   15338 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0709 16:37:45.280553   15338 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0709 16:37:45.530955   15338 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0709 16:37:45.530979   15338 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0709 16:37:45.550604   15338 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0709 16:37:45.550623   15338 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0709 16:37:45.568681   15338 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0709 16:37:45.568702   15338 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0709 16:37:45.605782   15338 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0709 16:37:45.605803   15338 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0709 16:37:45.617908   15338 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0709 16:37:45.617932   15338 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0709 16:37:45.623600   15338 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0709 16:37:45.623615   15338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0709 16:37:45.873461   15338 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0709 16:37:45.873496   15338 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0709 16:37:45.910056   15338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0709 16:37:45.934198   15338 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0709 16:37:45.934225   15338 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0709 16:37:45.988764   15338 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0709 16:37:45.988790   15338 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0709 16:37:46.023558   15338 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0709 16:37:46.023582   15338 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0709 16:37:46.290460   15338 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0709 16:37:46.290483   15338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0709 16:37:46.300413   15338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0709 16:37:46.446454   15338 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0709 16:37:46.446488   15338 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0709 16:37:46.492084   15338 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0709 16:37:46.492105   15338 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0709 16:37:46.748491   15338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0709 16:37:46.776227   15338 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0709 16:37:46.776262   15338 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0709 16:37:46.979621   15338 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0709 16:37:46.979643   15338 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0709 16:37:47.009028   15338 pod_ready.go:92] pod "kube-proxy-59z6c" in "kube-system" namespace has status "Ready":"True"
	I0709 16:37:47.009049   15338 pod_ready.go:81] duration metric: took 2.006278551s for pod "kube-proxy-59z6c" in "kube-system" namespace to be "Ready" ...
	I0709 16:37:47.009057   15338 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-470383" in "kube-system" namespace to be "Ready" ...
	I0709 16:37:47.020126   15338 pod_ready.go:92] pod "kube-scheduler-addons-470383" in "kube-system" namespace has status "Ready":"True"
	I0709 16:37:47.020157   15338 pod_ready.go:81] duration metric: took 11.0935ms for pod "kube-scheduler-addons-470383" in "kube-system" namespace to be "Ready" ...
	I0709 16:37:47.020171   15338 pod_ready.go:38] duration metric: took 2.042030032s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 16:37:47.020193   15338 api_server.go:52] waiting for apiserver process to appear ...
	I0709 16:37:47.020251   15338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 16:37:47.280862   15338 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0709 16:37:47.280887   15338 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0709 16:37:47.308691   15338 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0709 16:37:47.308724   15338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0709 16:37:47.460925   15338 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0709 16:37:47.460950   15338 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0709 16:37:47.472157   15338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0709 16:37:47.644324   15338 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0709 16:37:47.644349   15338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0709 16:37:47.659025   15338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0709 16:37:47.957322   15338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.891956388s)
	I0709 16:37:47.957322   15338 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.034900438s)
	I0709 16:37:47.957374   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:47.957385   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:47.957391   15338 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0709 16:37:47.957725   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:47.957744   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:47.957759   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:47.957774   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:47.957813   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:47.958087   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:47.958094   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:47.958140   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:47.988871   15338 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0709 16:37:47.988897   15338 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0709 16:37:48.280095   15338 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0709 16:37:48.280125   15338 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0709 16:37:48.437379   15338 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0709 16:37:48.437399   15338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0709 16:37:48.460968   15338 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-470383" context rescaled to 1 replicas
	I0709 16:37:48.552123   15338 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0709 16:37:48.552161   15338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0709 16:37:48.730106   15338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.590893407s)
	I0709 16:37:48.730159   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:48.730172   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:48.730440   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:48.730534   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:48.730510   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:48.730547   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:48.730596   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:48.730902   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:48.730908   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:48.730917   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:48.743930   15338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0709 16:37:48.779864   15338 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0709 16:37:48.779884   15338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0709 16:37:48.955598   15338 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0709 16:37:48.955623   15338 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0709 16:37:49.475440   15338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0709 16:37:51.667192   15338 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0709 16:37:51.667234   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:51.669975   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:51.670334   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:51.670369   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:51.670496   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:51.670728   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:51.670928   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:51.671114   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:52.649049   15338 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0709 16:37:53.018323   15338 addons.go:234] Setting addon gcp-auth=true in "addons-470383"
	I0709 16:37:53.018382   15338 host.go:66] Checking if "addons-470383" exists ...
	I0709 16:37:53.018710   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:53.018735   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:53.034456   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37517
	I0709 16:37:53.034922   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:53.035364   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:53.035389   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:53.035699   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:53.036268   15338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:37:53.036307   15338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:37:53.051445   15338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35927
	I0709 16:37:53.051897   15338 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:37:53.052934   15338 main.go:141] libmachine: Using API Version  1
	I0709 16:37:53.052959   15338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:37:53.053381   15338 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:37:53.053583   15338 main.go:141] libmachine: (addons-470383) Calling .GetState
	I0709 16:37:53.055399   15338 main.go:141] libmachine: (addons-470383) Calling .DriverName
	I0709 16:37:53.055642   15338 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0709 16:37:53.055666   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHHostname
	I0709 16:37:53.057968   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:53.058332   15338 main.go:141] libmachine: (addons-470383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ff:9e", ip: ""} in network mk-addons-470383: {Iface:virbr1 ExpiryTime:2024-07-09 17:36:56 +0000 UTC Type:0 Mac:52:54:00:21:ff:9e Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-470383 Clientid:01:52:54:00:21:ff:9e}
	I0709 16:37:53.058359   15338 main.go:141] libmachine: (addons-470383) DBG | domain addons-470383 has defined IP address 192.168.39.216 and MAC address 52:54:00:21:ff:9e in network mk-addons-470383
	I0709 16:37:53.058543   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHPort
	I0709 16:37:53.058691   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHKeyPath
	I0709 16:37:53.058821   15338 main.go:141] libmachine: (addons-470383) Calling .GetSSHUsername
	I0709 16:37:53.058946   15338 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/addons-470383/id_rsa Username:docker}
	I0709 16:37:55.662217   15338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.467081291s)
	I0709 16:37:55.662281   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:55.662277   15338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.462461002s)
	I0709 16:37:55.662292   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:55.662314   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:55.662327   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:55.662365   15338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.422343871s)
	I0709 16:37:55.662411   15338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.422223403s)
	I0709 16:37:55.662438   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:55.662451   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:55.662416   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:55.662499   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:55.662446   15338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.420297341s)
	I0709 16:37:55.662586   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:55.662622   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:55.662674   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:55.662595   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:55.662647   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:55.662699   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:55.662707   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:55.662717   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:55.662719   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:55.662731   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:55.662740   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:55.662749   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:55.662798   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:55.662820   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:55.662826   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:55.662834   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:55.662840   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:55.662890   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:55.662625   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:55.662924   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:55.662936   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:55.662943   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:55.662963   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:55.662971   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:55.662993   15338 addons.go:475] Verifying addon ingress=true in "addons-470383"
	I0709 16:37:55.664336   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:55.664361   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:55.664368   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:55.664452   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:55.664457   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:55.664469   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:55.664477   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:55.664491   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:55.664516   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:55.664530   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:55.664477   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:55.664590   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:55.664599   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:55.664882   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:55.664947   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:55.664961   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:55.666299   15338 out.go:177] * Verifying ingress addon...
	I0709 16:37:55.668644   15338 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0709 16:37:55.690081   15338 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0709 16:37:55.690108   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:37:55.695596   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:55.695621   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:55.695891   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:55.695911   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	W0709 16:37:55.696011   15338 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0709 16:37:55.703704   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:55.703725   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:55.703974   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:55.703990   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:56.217341   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:37:56.712650   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:37:57.190293   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:37:57.707831   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:37:58.232569   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:37:58.649628   15338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (12.7395361s)
	I0709 16:37:58.649677   15338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (12.349237028s)
	I0709 16:37:58.649679   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:58.649693   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:58.649697   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:58.649729   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:58.649781   15338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (11.901243746s)
	I0709 16:37:58.649824   15338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (11.177642573s)
	I0709 16:37:58.649787   15338 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (11.62952096s)
	I0709 16:37:58.649842   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:58.649846   15338 api_server.go:72] duration metric: took 14.182248532s to wait for apiserver process to appear ...
	I0709 16:37:58.649856   15338 api_server.go:88] waiting for apiserver healthz status ...
	I0709 16:37:58.649823   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:58.649869   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:58.649875   15338 api_server.go:253] Checking apiserver healthz at https://192.168.39.216:8443/healthz ...
	I0709 16:37:58.649857   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:58.649915   15338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.99086341s)
	W0709 16:37:58.649940   15338 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0709 16:37:58.649963   15338 retry.go:31] will retry after 209.841604ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0709 16:37:58.650012   15338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.90604949s)
	I0709 16:37:58.650035   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:58.650044   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:58.650073   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:58.650092   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:58.650094   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:58.650113   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:58.650116   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:58.650123   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:58.650123   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:58.650131   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:58.650133   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:58.650141   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:58.650148   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:58.650157   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:58.650165   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:58.650172   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:58.650221   15338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (13.40672157s)
	I0709 16:37:58.650238   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:58.650246   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:58.650581   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:58.650611   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:58.650618   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:58.650626   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:58.650633   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:58.650676   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:58.650694   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:58.650701   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:58.650708   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:58.650714   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:58.650747   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:58.650764   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:58.650770   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:58.650778   15338 addons.go:475] Verifying addon metrics-server=true in "addons-470383"
	I0709 16:37:58.650981   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:58.651009   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:58.651016   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:58.651023   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:58.651029   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:58.651090   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:58.651108   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:58.651123   15338 addons.go:475] Verifying addon registry=true in "addons-470383"
	I0709 16:37:58.651323   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:58.651359   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:58.651375   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:58.652615   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:58.652642   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:58.652649   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:58.653190   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:58.653230   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:58.653238   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:58.653275   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:37:58.653330   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:58.653394   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:58.654053   15338 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-470383 service yakd-dashboard -n yakd-dashboard
	
	I0709 16:37:58.654066   15338 out.go:177] * Verifying registry addon...
	I0709 16:37:58.656337   15338 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0709 16:37:58.658373   15338 api_server.go:279] https://192.168.39.216:8443/healthz returned 200:
	ok
	I0709 16:37:58.659774   15338 api_server.go:141] control plane version: v1.30.2
	I0709 16:37:58.659791   15338 api_server.go:131] duration metric: took 9.927481ms to wait for apiserver health ...
	I0709 16:37:58.659799   15338 system_pods.go:43] waiting for kube-system pods to appear ...
	I0709 16:37:58.699649   15338 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0709 16:37:58.699672   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:37:58.699981   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:37:58.703204   15338 system_pods.go:59] 16 kube-system pods found
	I0709 16:37:58.703243   15338 system_pods.go:61] "coredns-7db6d8ff4d-2k7j9" [64f135e7-3f8c-4eb3-a77f-d04000b4e9bb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I0709 16:37:58.703255   15338 system_pods.go:61] "coredns-7db6d8ff4d-zb54k" [2bef6679-497f-41e2-b3e7-e90a98243353] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0709 16:37:58.703267   15338 system_pods.go:61] "etcd-addons-470383" [18e4c111-8617-4763-abb7-d5b9bd6ed213] Running
	I0709 16:37:58.703279   15338 system_pods.go:61] "kube-apiserver-addons-470383" [76b460f7-78a7-41d1-92f9-02d09ffa9c05] Running
	I0709 16:37:58.703286   15338 system_pods.go:61] "kube-controller-manager-addons-470383" [2929fd5b-2db0-46f0-9bdc-71cf85cce085] Running
	I0709 16:37:58.703298   15338 system_pods.go:61] "kube-ingress-dns-minikube" [d09e54a1-5181-4e52-bc81-8c094edd3edd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0709 16:37:58.703308   15338 system_pods.go:61] "kube-proxy-59z6c" [fae54860-ce85-4bce-a35a-dfdcc9685428] Running
	I0709 16:37:58.703315   15338 system_pods.go:61] "kube-scheduler-addons-470383" [499843ca-d394-4db7-9114-d14bcf62b6f6] Running
	I0709 16:37:58.703324   15338 system_pods.go:61] "metrics-server-c59844bb4-2s6bg" [b14f1175-baf5-4ce1-8f58-e7e12b70e88a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0709 16:37:58.703338   15338 system_pods.go:61] "nvidia-device-plugin-daemonset-fbldv" [000921d8-448e-426c-9e4b-7d8c82757189] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0709 16:37:58.703352   15338 system_pods.go:61] "registry-proxy-pqgzx" [f8b3ce45-0b02-4af7-8df1-31575bfe65f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0709 16:37:58.703363   15338 system_pods.go:61] "registry-qw6rf" [2d4c0856-2f3e-42c9-96eb-940302a22b52] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0709 16:37:58.703376   15338 system_pods.go:61] "snapshot-controller-745499f584-2zwsm" [4354a3d0-6c7f-42e2-9b60-c2357a7e51b3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0709 16:37:58.703390   15338 system_pods.go:61] "snapshot-controller-745499f584-tflnq" [328cf0ec-e4df-4fe1-a27d-140d429e5b42] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0709 16:37:58.703400   15338 system_pods.go:61] "storage-provisioner" [59b01d1b-e2e8-4bd0-b1f2-1c083e562322] Running
	I0709 16:37:58.703458   15338 system_pods.go:61] "tiller-deploy-6677d64bcd-cgq7t" [22a78248-d085-4876-9a88-009128ffc0f0] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0709 16:37:58.703474   15338 system_pods.go:74] duration metric: took 43.668691ms to wait for pod list to return data ...
	I0709 16:37:58.703485   15338 default_sa.go:34] waiting for default service account to be created ...
	I0709 16:37:58.711669   15338 default_sa.go:45] found service account: "default"
	I0709 16:37:58.711687   15338 default_sa.go:55] duration metric: took 8.192626ms for default service account to be created ...
	I0709 16:37:58.711695   15338 system_pods.go:116] waiting for k8s-apps to be running ...
	I0709 16:37:58.724882   15338 system_pods.go:86] 16 kube-system pods found
	I0709 16:37:58.724905   15338 system_pods.go:89] "coredns-7db6d8ff4d-2k7j9" [64f135e7-3f8c-4eb3-a77f-d04000b4e9bb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I0709 16:37:58.724913   15338 system_pods.go:89] "coredns-7db6d8ff4d-zb54k" [2bef6679-497f-41e2-b3e7-e90a98243353] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0709 16:37:58.724919   15338 system_pods.go:89] "etcd-addons-470383" [18e4c111-8617-4763-abb7-d5b9bd6ed213] Running
	I0709 16:37:58.724924   15338 system_pods.go:89] "kube-apiserver-addons-470383" [76b460f7-78a7-41d1-92f9-02d09ffa9c05] Running
	I0709 16:37:58.724930   15338 system_pods.go:89] "kube-controller-manager-addons-470383" [2929fd5b-2db0-46f0-9bdc-71cf85cce085] Running
	I0709 16:37:58.724938   15338 system_pods.go:89] "kube-ingress-dns-minikube" [d09e54a1-5181-4e52-bc81-8c094edd3edd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0709 16:37:58.724947   15338 system_pods.go:89] "kube-proxy-59z6c" [fae54860-ce85-4bce-a35a-dfdcc9685428] Running
	I0709 16:37:58.724958   15338 system_pods.go:89] "kube-scheduler-addons-470383" [499843ca-d394-4db7-9114-d14bcf62b6f6] Running
	I0709 16:37:58.724967   15338 system_pods.go:89] "metrics-server-c59844bb4-2s6bg" [b14f1175-baf5-4ce1-8f58-e7e12b70e88a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0709 16:37:58.724982   15338 system_pods.go:89] "nvidia-device-plugin-daemonset-fbldv" [000921d8-448e-426c-9e4b-7d8c82757189] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0709 16:37:58.724988   15338 system_pods.go:89] "registry-proxy-pqgzx" [f8b3ce45-0b02-4af7-8df1-31575bfe65f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0709 16:37:58.724995   15338 system_pods.go:89] "registry-qw6rf" [2d4c0856-2f3e-42c9-96eb-940302a22b52] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0709 16:37:58.725002   15338 system_pods.go:89] "snapshot-controller-745499f584-2zwsm" [4354a3d0-6c7f-42e2-9b60-c2357a7e51b3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0709 16:37:58.725008   15338 system_pods.go:89] "snapshot-controller-745499f584-tflnq" [328cf0ec-e4df-4fe1-a27d-140d429e5b42] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0709 16:37:58.725014   15338 system_pods.go:89] "storage-provisioner" [59b01d1b-e2e8-4bd0-b1f2-1c083e562322] Running
	I0709 16:37:58.725020   15338 system_pods.go:89] "tiller-deploy-6677d64bcd-cgq7t" [22a78248-d085-4876-9a88-009128ffc0f0] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0709 16:37:58.725026   15338 system_pods.go:126] duration metric: took 13.326798ms to wait for k8s-apps to be running ...
	I0709 16:37:58.725037   15338 system_svc.go:44] waiting for kubelet service to be running ....
	I0709 16:37:58.725086   15338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 16:37:58.860389   15338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0709 16:37:59.291261   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:37:59.319074   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:37:59.510750   15338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (10.035251212s)
	I0709 16:37:59.510763   15338 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.455103874s)
	I0709 16:37:59.510830   15338 system_svc.go:56] duration metric: took 785.786248ms WaitForService to wait for kubelet
	I0709 16:37:59.510851   15338 kubeadm.go:576] duration metric: took 15.043252025s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 16:37:59.510885   15338 node_conditions.go:102] verifying NodePressure condition ...
	I0709 16:37:59.510804   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:59.510909   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:59.511228   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:59.511247   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:59.511255   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:37:59.511261   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:37:59.511477   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:37:59.511498   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:37:59.511507   15338 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-470383"
	I0709 16:37:59.512298   15338 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0709 16:37:59.513178   15338 out.go:177] * Verifying csi-hostpath-driver addon...
	I0709 16:37:59.514627   15338 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0709 16:37:59.515715   15338 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0709 16:37:59.515761   15338 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0709 16:37:59.515770   15338 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0709 16:37:59.542371   15338 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 16:37:59.542395   15338 node_conditions.go:123] node cpu capacity is 2
	I0709 16:37:59.542406   15338 node_conditions.go:105] duration metric: took 31.51601ms to run NodePressure ...
	I0709 16:37:59.542415   15338 start.go:240] waiting for startup goroutines ...
	I0709 16:37:59.551547   15338 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0709 16:37:59.551574   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:37:59.631688   15338 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0709 16:37:59.631707   15338 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0709 16:37:59.667128   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:37:59.675762   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:37:59.760207   15338 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0709 16:37:59.760225   15338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0709 16:37:59.827568   15338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0709 16:38:00.043190   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:00.196963   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:00.197453   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:00.521712   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:00.661351   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:00.672781   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:01.021943   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:01.076527   15338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.216093365s)
	I0709 16:38:01.076579   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:38:01.076593   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:38:01.076948   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:38:01.076967   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:38:01.076976   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:38:01.076980   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:38:01.076984   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:38:01.077176   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:38:01.077188   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:38:01.169273   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:01.180191   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:01.332346   15338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.504744743s)
	I0709 16:38:01.332402   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:38:01.332415   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:38:01.332805   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:38:01.332885   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:38:01.332899   15338 main.go:141] libmachine: Making call to close driver server
	I0709 16:38:01.332914   15338 main.go:141] libmachine: (addons-470383) Calling .Close
	I0709 16:38:01.332856   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:38:01.333234   15338 main.go:141] libmachine: (addons-470383) DBG | Closing plugin on server side
	I0709 16:38:01.333265   15338 main.go:141] libmachine: Successfully made call to close driver server
	I0709 16:38:01.333277   15338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0709 16:38:01.335433   15338 addons.go:475] Verifying addon gcp-auth=true in "addons-470383"
	I0709 16:38:01.336811   15338 out.go:177] * Verifying gcp-auth addon...
	I0709 16:38:01.339088   15338 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0709 16:38:01.351408   15338 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0709 16:38:01.521953   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:01.661672   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:01.672793   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:02.024605   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:02.161575   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:02.172925   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:02.520931   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:02.661581   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:02.673075   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:03.023939   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:03.160716   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:03.173235   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:03.520804   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:03.660776   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:03.672814   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:04.022184   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:04.161511   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:04.172796   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:04.521590   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:04.662026   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:04.673037   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:05.221725   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:05.221918   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:05.223891   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:05.523196   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:05.661425   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:05.672260   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:06.280635   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:06.281942   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:06.283487   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:06.520923   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:06.661658   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:06.672998   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:07.021490   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:07.160701   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:07.173232   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:07.522372   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:07.664423   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:07.675142   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:08.021831   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:08.161649   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:08.174694   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:08.522068   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:08.661273   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:08.672519   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:09.021668   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:09.160795   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:09.173033   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:09.521256   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:09.660984   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:09.672808   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:10.021367   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:10.160868   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:10.173159   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:10.739198   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:10.741036   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:10.743112   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:11.023726   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:11.161386   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:11.172576   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:11.521701   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:11.662559   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:11.675440   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:12.021861   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:12.160670   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:12.172718   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:12.520918   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:12.661634   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:12.676485   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:13.021360   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:13.160767   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:13.172644   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:13.520933   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:13.661336   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:13.672457   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:14.021413   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:14.161375   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:14.172235   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:14.525818   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:14.667975   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:14.675067   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:15.021235   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:15.161086   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:15.172782   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:15.520896   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:15.661061   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:15.671962   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:16.021102   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:16.161329   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:16.172544   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:16.526163   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:16.661442   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:16.672460   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:17.021691   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:17.160913   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:17.173419   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:17.520231   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:17.660882   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:17.673733   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:18.022077   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:18.160498   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:18.172410   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:18.680611   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:18.681937   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:18.682312   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:19.143528   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:19.161386   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:19.173678   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:19.521313   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:19.660503   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:19.672410   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:20.021036   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:20.175541   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:20.178386   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:20.520966   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:20.663156   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:20.674287   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:21.021795   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:21.161544   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:21.172890   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:21.520474   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:21.660637   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:21.673168   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:22.021146   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:22.162218   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:22.172288   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:22.521414   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:22.661197   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:22.673746   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:23.021903   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:23.161328   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:23.172872   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:23.522153   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:23.661445   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:23.673247   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:24.025877   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:24.162300   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:24.173452   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:24.523350   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:24.887978   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:24.889602   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:25.020706   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:25.161452   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:25.173599   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:25.521641   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:25.661688   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:25.672643   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:26.020676   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:26.161479   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:26.173345   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:26.521598   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:26.660843   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:26.672988   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:27.021281   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:27.161820   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:27.172933   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:27.521168   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:27.667118   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:27.673294   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:28.022739   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:28.162177   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:28.173859   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:28.521404   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:28.661441   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:28.672930   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:29.020929   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:29.161893   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:29.173076   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:29.520937   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:29.662784   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:29.674643   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:30.020959   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:30.162028   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:30.172761   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:30.521367   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:30.661757   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:30.672729   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:31.177880   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:31.180687   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:31.180804   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:31.521280   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:31.666512   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:31.672664   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:32.023994   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:32.163457   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:32.173094   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:32.521826   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:32.666687   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:32.675376   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:33.021245   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:33.161285   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:33.172165   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:33.521620   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:33.667012   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:33.677658   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:34.037262   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:34.160748   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:34.172502   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:34.525625   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:34.667128   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:34.672417   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:35.027138   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:35.161752   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:35.173318   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:35.520803   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:35.661886   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:35.673931   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:36.218399   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:36.218458   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:36.227497   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:36.521320   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:36.661235   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:36.672226   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:37.021969   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:37.161338   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:37.172571   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:37.521403   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:37.661713   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:37.673380   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:38.022361   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:38.162239   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:38.172858   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:38.521061   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:38.660253   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:38.672190   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:39.021731   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:39.161801   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:39.172840   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:39.520914   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:39.662193   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:39.673289   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:40.035346   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:40.162158   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:40.173474   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:40.521342   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:40.661516   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:40.673636   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:41.043869   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:41.163262   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:41.176330   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:41.522327   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:41.662121   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:41.672247   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:42.020996   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:42.161201   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:42.172204   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:42.521225   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:42.662049   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:42.672768   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:43.021762   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:43.161513   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:43.172644   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:43.522996   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:43.662531   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 16:38:43.674327   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:44.022100   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:44.162115   15338 kapi.go:107] duration metric: took 45.505777401s to wait for kubernetes.io/minikube-addons=registry ...
	I0709 16:38:44.171924   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:44.520993   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:44.674061   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:45.023999   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:45.173684   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:45.521840   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:45.673809   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:46.020735   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:46.173817   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:46.520909   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:46.673059   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:47.022334   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:47.173133   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:47.526361   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:47.675867   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:48.021161   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:48.173665   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:48.521225   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:48.675722   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:49.022416   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:49.173449   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:49.521445   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:49.673309   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:50.021977   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:50.173055   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:50.521567   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:50.673291   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:51.021763   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:51.173258   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:51.667400   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:51.677598   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:52.021420   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:52.177703   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:52.522629   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:52.675120   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:53.021527   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:53.173968   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:53.521766   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:53.673771   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:54.021826   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:54.173546   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:54.521917   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:54.672843   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:55.022907   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:55.173801   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:55.522705   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:55.676394   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:56.021168   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:56.173548   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:56.521300   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:56.690055   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:57.021167   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:57.173170   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:57.526956   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:57.674429   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:58.022988   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:58.173064   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:58.521835   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:58.673428   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:59.021712   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:59.174321   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:38:59.521848   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:38:59.673059   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:00.022223   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:00.174489   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:00.520767   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:00.673371   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:01.021260   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:01.173872   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:01.521456   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:01.675423   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:02.021669   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:02.176056   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:02.523688   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:02.673159   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:03.022044   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:03.174105   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:03.520640   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:03.673213   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:04.021842   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:04.173256   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:04.522603   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:04.675405   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:05.039316   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:05.176251   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:05.529166   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:05.692346   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:06.021890   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:06.175099   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:06.527583   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:06.674517   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:07.024147   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:07.183724   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:07.732491   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:07.734308   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:08.021571   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:08.172687   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:08.521261   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:08.673211   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:09.022873   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:09.172836   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:09.520796   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:09.673463   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:10.021198   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:10.173640   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:10.522891   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:10.673274   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:11.022869   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:11.172814   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:11.521626   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:11.673499   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:12.021844   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:12.175668   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:12.521326   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:12.673797   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:13.022135   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:13.179037   15338 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 16:39:13.522827   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:13.676822   15338 kapi.go:107] duration metric: took 1m18.008173367s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0709 16:39:14.023673   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:14.521141   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:15.022192   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:15.521861   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:16.030530   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:16.527519   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:17.023206   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:17.521736   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:18.021363   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:18.521192   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:19.023976   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:19.521714   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:20.022672   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:20.522107   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 16:39:21.023561   15338 kapi.go:107] duration metric: took 1m21.507877944s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0709 16:39:24.358305   15338 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0709 16:39:24.358338   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:24.843235   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:25.343901   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:25.843314   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:26.342996   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:26.842602   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:27.343039   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:27.841932   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:28.343141   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:28.842879   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:29.342353   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:29.842845   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:30.343290   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:30.844054   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:31.342812   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:31.844216   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:32.342631   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:32.843363   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:33.343878   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:33.843307   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:34.343758   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:34.843853   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:35.344278   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:35.843166   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:36.342829   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:36.843904   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:37.342773   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:37.843813   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:38.343355   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:38.844085   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:39.342643   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:39.843001   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:40.342789   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:40.843708   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:41.344429   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:41.842979   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:42.342471   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:42.843131   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:43.342724   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:43.844095   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:44.342302   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:44.843841   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:45.343346   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:45.842887   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:46.343446   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:46.842791   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:47.342625   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:47.843463   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:48.342769   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:48.844983   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:49.346085   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:49.842248   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:50.342634   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:50.843042   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:51.342644   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:51.843507   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:52.343261   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:52.844085   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:53.343265   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:53.843758   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:54.343456   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:54.843779   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:55.343287   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:55.846592   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:56.343982   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:56.842624   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:57.342657   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:57.844023   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:58.342953   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:58.842929   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:59.342441   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:39:59.847956   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:00.343243   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:00.842881   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:01.342214   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:01.843195   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:02.342717   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:02.843226   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:03.342635   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:03.843247   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:04.344586   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:04.843333   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:05.343086   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:05.842574   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:06.343410   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:06.843008   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:07.343389   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:07.842946   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:08.342517   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:08.843390   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:09.342683   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:09.843381   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:10.343138   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:10.842777   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:11.344018   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:11.842796   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:12.343309   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:12.843340   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:13.343364   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:13.842714   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:14.343739   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:14.843217   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:15.342677   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:15.843651   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:16.343491   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:16.843746   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:17.343926   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:17.843005   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:18.342708   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:18.843435   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:19.342649   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:19.843382   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:20.343402   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:20.843042   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:21.343049   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:21.845284   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:22.343470   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:22.843027   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:23.342695   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:23.843739   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:24.344563   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:24.843313   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:25.342796   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:25.843629   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:26.343757   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:26.843648   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:27.342775   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:27.844065   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:28.343056   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:28.842903   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:29.343236   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:29.842599   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:30.343984   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:30.842953   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:31.342251   15338 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 16:40:31.842712   15338 kapi.go:107] duration metric: took 2m30.503622683s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0709 16:40:31.844736   15338 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-470383 cluster.
	I0709 16:40:31.846138   15338 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0709 16:40:31.847845   15338 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0709 16:40:31.849172   15338 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, storage-provisioner-rancher, metrics-server, inspektor-gadget, volcano, helm-tiller, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0709 16:40:31.850367   15338 addons.go:510] duration metric: took 2m47.382764808s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner storage-provisioner-rancher metrics-server inspektor-gadget volcano helm-tiller yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0709 16:40:31.850408   15338 start.go:245] waiting for cluster config update ...
	I0709 16:40:31.850431   15338 start.go:254] writing updated cluster config ...
	I0709 16:40:31.850672   15338 ssh_runner.go:195] Run: rm -f paused
	I0709 16:40:31.899874   15338 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0709 16:40:31.901722   15338 out.go:177] * Done! kubectl is now configured to use "addons-470383" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 09 16:41:47 addons-470383 dockerd[1202]: time="2024-07-09T16:41:47.947818934Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 16:41:48 addons-470383 dockerd[1195]: time="2024-07-09T16:41:48.012462865Z" level=info msg="ignoring event" container=86cf40bf829ed7614ace2a6f4a378dcb745768fc86a1e9af7a6067b002b0e64e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 16:41:48 addons-470383 dockerd[1202]: time="2024-07-09T16:41:48.015622235Z" level=info msg="shim disconnected" id=86cf40bf829ed7614ace2a6f4a378dcb745768fc86a1e9af7a6067b002b0e64e namespace=moby
	Jul 09 16:41:48 addons-470383 dockerd[1202]: time="2024-07-09T16:41:48.016824029Z" level=warning msg="cleaning up after shim disconnected" id=86cf40bf829ed7614ace2a6f4a378dcb745768fc86a1e9af7a6067b002b0e64e namespace=moby
	Jul 09 16:41:48 addons-470383 dockerd[1202]: time="2024-07-09T16:41:48.016962351Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 16:41:48 addons-470383 dockerd[1202]: time="2024-07-09T16:41:48.066043793Z" level=info msg="shim disconnected" id=7b5dd9482cea2ae0ae999e0024b241172ff5ede82cc06ea680afba4674aa492c namespace=moby
	Jul 09 16:41:48 addons-470383 dockerd[1202]: time="2024-07-09T16:41:48.066114729Z" level=warning msg="cleaning up after shim disconnected" id=7b5dd9482cea2ae0ae999e0024b241172ff5ede82cc06ea680afba4674aa492c namespace=moby
	Jul 09 16:41:48 addons-470383 dockerd[1202]: time="2024-07-09T16:41:48.066124440Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 16:41:48 addons-470383 dockerd[1195]: time="2024-07-09T16:41:48.066977025Z" level=info msg="ignoring event" container=7b5dd9482cea2ae0ae999e0024b241172ff5ede82cc06ea680afba4674aa492c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 16:41:54 addons-470383 dockerd[1195]: time="2024-07-09T16:41:54.067195009Z" level=info msg="ignoring event" container=9d90fb6063c46fb46a97744392aaa15086efc2daa560374c2add59b6d1fa561a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 16:41:54 addons-470383 dockerd[1202]: time="2024-07-09T16:41:54.068527467Z" level=info msg="shim disconnected" id=9d90fb6063c46fb46a97744392aaa15086efc2daa560374c2add59b6d1fa561a namespace=moby
	Jul 09 16:41:54 addons-470383 dockerd[1202]: time="2024-07-09T16:41:54.068611765Z" level=warning msg="cleaning up after shim disconnected" id=9d90fb6063c46fb46a97744392aaa15086efc2daa560374c2add59b6d1fa561a namespace=moby
	Jul 09 16:41:54 addons-470383 dockerd[1202]: time="2024-07-09T16:41:54.068621093Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 16:41:54 addons-470383 dockerd[1195]: time="2024-07-09T16:41:54.080673937Z" level=info msg="ignoring event" container=e6e1aad175441259debee29a0c8604bbc6a863a2190bceaef53dea84b7cc4639 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 16:41:54 addons-470383 dockerd[1202]: time="2024-07-09T16:41:54.085453615Z" level=info msg="shim disconnected" id=e6e1aad175441259debee29a0c8604bbc6a863a2190bceaef53dea84b7cc4639 namespace=moby
	Jul 09 16:41:54 addons-470383 dockerd[1202]: time="2024-07-09T16:41:54.085629830Z" level=warning msg="cleaning up after shim disconnected" id=e6e1aad175441259debee29a0c8604bbc6a863a2190bceaef53dea84b7cc4639 namespace=moby
	Jul 09 16:41:54 addons-470383 dockerd[1202]: time="2024-07-09T16:41:54.085685337Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 16:41:54 addons-470383 dockerd[1195]: time="2024-07-09T16:41:54.250778322Z" level=info msg="ignoring event" container=6ec4555f31137188f4b83f6459779924c5fb548e176916e45ae43d41d6629108 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 16:41:54 addons-470383 dockerd[1202]: time="2024-07-09T16:41:54.251827869Z" level=info msg="shim disconnected" id=6ec4555f31137188f4b83f6459779924c5fb548e176916e45ae43d41d6629108 namespace=moby
	Jul 09 16:41:54 addons-470383 dockerd[1202]: time="2024-07-09T16:41:54.252301571Z" level=warning msg="cleaning up after shim disconnected" id=6ec4555f31137188f4b83f6459779924c5fb548e176916e45ae43d41d6629108 namespace=moby
	Jul 09 16:41:54 addons-470383 dockerd[1202]: time="2024-07-09T16:41:54.252444014Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 16:41:54 addons-470383 dockerd[1202]: time="2024-07-09T16:41:54.258973979Z" level=info msg="shim disconnected" id=01f5998a5d1447796d065026f770a61d0dacc739a173a80fac961082666c6d89 namespace=moby
	Jul 09 16:41:54 addons-470383 dockerd[1202]: time="2024-07-09T16:41:54.259035152Z" level=warning msg="cleaning up after shim disconnected" id=01f5998a5d1447796d065026f770a61d0dacc739a173a80fac961082666c6d89 namespace=moby
	Jul 09 16:41:54 addons-470383 dockerd[1202]: time="2024-07-09T16:41:54.259045326Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 16:41:54 addons-470383 dockerd[1195]: time="2024-07-09T16:41:54.259511614Z" level=info msg="ignoring event" container=01f5998a5d1447796d065026f770a61d0dacc739a173a80fac961082666c6d89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c46923f00e364       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                        8 minutes ago       Running             headlamp                  0                   3ce1737683b54       headlamp-7867546754-fwm7f
	81d87583eed1b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 8 minutes ago       Running             gcp-auth                  0                   a2ca9a2363495       gcp-auth-5db96cd9b4-tx2nb
	52fa6f3206f1f       registry.k8s.io/ingress-nginx/controller@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e             10 minutes ago      Running             controller                0                   16b0ab714b021       ingress-nginx-controller-768f948f8f-b7647
	9d7a704dc52f0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   10 minutes ago      Exited              patch                     0                   9ed700b22ac1e       ingress-nginx-admission-patch-nvnwq
	f8c6be5981a81       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   10 minutes ago      Exited              create                    0                   866e5fd354d91       ingress-nginx-admission-create-jcxj8
	41d9ecfd7ec3e       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        10 minutes ago      Running             yakd                      0                   ae474b688b2bb       yakd-dashboard-799879c74f-wmgc5
	436a4c182747b       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             10 minutes ago      Running             minikube-ingress-dns      0                   bc4b3abde71f1       kube-ingress-dns-minikube
	63791a5036355       6e38f40d628db                                                                                                                11 minutes ago      Running             storage-provisioner       0                   0018ef4a4d1cd       storage-provisioner
	61b5286d6564f       cbb01a7bd410d                                                                                                                11 minutes ago      Running             coredns                   0                   167385efdad42       coredns-7db6d8ff4d-zb54k
	56c89c638fd0f       53c535741fb44                                                                                                                11 minutes ago      Running             kube-proxy                0                   7913a596c2886       kube-proxy-59z6c
	b276ce256923e       3861cfcd7c04c                                                                                                                11 minutes ago      Running             etcd                      0                   cc009f49e88b2       etcd-addons-470383
	ace02a99e9235       7820c83aa1394                                                                                                                11 minutes ago      Running             kube-scheduler            0                   c615f8fa8d234       kube-scheduler-addons-470383
	d4d3086f651a1       56ce0fd9fb532                                                                                                                11 minutes ago      Running             kube-apiserver            0                   6e221432fe860       kube-apiserver-addons-470383
	62cb6767f140c       e874818b3caac                                                                                                                11 minutes ago      Running             kube-controller-manager   0                   3a3e2ee33fd9e       kube-controller-manager-addons-470383
	
	
	==> controller_ingress [52fa6f3206f1] <==
	I0709 16:39:13.582203       8 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"10c9fe14-858d-44b4-81a7-1baba05a75b7", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0709 16:39:14.739502       8 nginx.go:307] "Starting NGINX process"
	I0709 16:39:14.739843       8 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0709 16:39:14.740289       8 nginx.go:327] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0709 16:39:14.741743       8 controller.go:190] "Configuration changes detected, backend reload required"
	I0709 16:39:14.756613       8 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0709 16:39:14.757026       8 status.go:84] "New leader elected" identity="ingress-nginx-controller-768f948f8f-b7647"
	I0709 16:39:14.776353       8 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-768f948f8f-b7647" node="addons-470383"
	I0709 16:39:14.841964       8 controller.go:210] "Backend successfully reloaded"
	I0709 16:39:14.842057       8 controller.go:221] "Initial sync, sleeping for 1 second"
	I0709 16:39:14.842253       8 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-b7647", UID:"8296bd2b-5b08-4b67-b996-cf07ed9493c6", APIVersion:"v1", ResourceVersion:"1286", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0709 16:41:12.850822       8 controller.go:1107] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0709 16:41:12.939903       8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.089s renderingIngressLength:1 renderingIngressTime:0.002s admissionTime:0.091s testedConfigurationSize:18.1kB}
	I0709 16:41:12.939980       8 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0709 16:41:13.035040       8 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0709 16:41:13.038142       8 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"9d0821ca-e3e2-4f3c-bb32-fd6531e7a47c", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1919", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0709 16:41:14.798759       8 status.go:304] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.39.216"}]
	I0709 16:41:14.808930       8 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"9d0821ca-e3e2-4f3c-bb32-fd6531e7a47c", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1989", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0709 16:41:15.685148       8 controller.go:1213] Service "default/nginx" does not have any active Endpoint.
	I0709 16:41:15.685280       8 controller.go:190] "Configuration changes detected, backend reload required"
	I0709 16:41:15.733372       8 controller.go:210] "Backend successfully reloaded"
	I0709 16:41:15.734369       8 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-b7647", UID:"8296bd2b-5b08-4b67-b996-cf07ed9493c6", APIVersion:"v1", ResourceVersion:"1286", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0709 16:41:19.018517       8 controller.go:1213] Service "default/nginx" does not have any active Endpoint.
	W0709 16:41:47.281179       8 controller.go:1213] Service "default/nginx" does not have any active Endpoint.
	W0709 16:41:50.614788       8 controller.go:1213] Service "default/nginx" does not have any active Endpoint.
	
	
	==> coredns [61b5286d6564] <==
	[INFO] 10.244.0.6:50597 - 59290 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000119612s
	[INFO] 10.244.0.6:35005 - 61693 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000046627s
	[INFO] 10.244.0.6:35005 - 47615 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00020843s
	[INFO] 10.244.0.6:58533 - 37734 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000188647s
	[INFO] 10.244.0.6:58533 - 59488 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000063202s
	[INFO] 10.244.0.6:43964 - 49909 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00023081s
	[INFO] 10.244.0.6:43964 - 1012 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00027609s
	[INFO] 10.244.0.6:48855 - 30192 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000188392s
	[INFO] 10.244.0.6:48855 - 25342 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000077838s
	[INFO] 10.244.0.6:60081 - 37187 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000107294s
	[INFO] 10.244.0.6:60081 - 27981 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000055768s
	[INFO] 10.244.0.6:46496 - 3383 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059213s
	[INFO] 10.244.0.6:46496 - 17969 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000221709s
	[INFO] 10.244.0.6:51977 - 40973 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000052917s
	[INFO] 10.244.0.6:51977 - 10255 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000184889s
	[INFO] 10.244.0.26:57892 - 51142 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000540417s
	[INFO] 10.244.0.26:57763 - 4067 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000089348s
	[INFO] 10.244.0.26:59403 - 39048 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000114238s
	[INFO] 10.244.0.26:41936 - 4330 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000084631s
	[INFO] 10.244.0.26:40704 - 38271 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000078246s
	[INFO] 10.244.0.26:52072 - 27510 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000048731s
	[INFO] 10.244.0.26:59872 - 38832 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001834402s
	[INFO] 10.244.0.26:50853 - 32466 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.00071499s
	[INFO] 10.244.0.29:53589 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000530914s
	[INFO] 10.244.0.29:33022 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000100133s
	
	
	==> describe nodes <==
	Name:               addons-470383
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-470383
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=addons-470383
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_09T16_37_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-470383
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 16:37:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-470383
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 16:49:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jul 2024 16:46:41 +0000   Tue, 09 Jul 2024 16:37:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jul 2024 16:46:41 +0000   Tue, 09 Jul 2024 16:37:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jul 2024 16:46:41 +0000   Tue, 09 Jul 2024 16:37:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jul 2024 16:46:41 +0000   Tue, 09 Jul 2024 16:37:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.216
	  Hostname:    addons-470383
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 527cbe90c4c040a59f868c54222b0c84
	  System UUID:                527cbe90-c4c0-40a5-9f86-8c54222b0c84
	  Boot ID:                    10254298-b30e-4f7a-b72a-920bb1bdd0f4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-5db96cd9b4-tx2nb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	  headlamp                    headlamp-7867546754-fwm7f                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m42s
	  ingress-nginx               ingress-nginx-controller-768f948f8f-b7647    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-zb54k                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-addons-470383                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         11m
	  kube-system                 kube-apiserver-addons-470383                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-470383        200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-59z6c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-470383                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-799879c74f-wmgc5              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node addons-470383 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node addons-470383 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node addons-470383 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node addons-470383 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node addons-470383 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node addons-470383 status is now: NodeHasSufficientPID
	  Normal  NodeReady                11m                kubelet          Node addons-470383 status is now: NodeReady
	  Normal  RegisteredNode           11m                node-controller  Node addons-470383 event: Registered Node addons-470383 in Controller
	
	
	==> dmesg <==
	[  +5.042845] kauditd_printk_skb: 96 callbacks suppressed
	[  +5.053133] kauditd_printk_skb: 102 callbacks suppressed
	[Jul 9 16:38] kauditd_printk_skb: 36 callbacks suppressed
	[ +12.858883] kauditd_printk_skb: 2 callbacks suppressed
	[ +17.102251] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.114318] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.189328] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.386886] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.524402] kauditd_printk_skb: 28 callbacks suppressed
	[Jul 9 16:39] kauditd_printk_skb: 39 callbacks suppressed
	[  +8.393209] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.916176] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.105835] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.654644] kauditd_printk_skb: 19 callbacks suppressed
	[Jul 9 16:40] kauditd_printk_skb: 24 callbacks suppressed
	[ +23.868261] kauditd_printk_skb: 40 callbacks suppressed
	[ +10.261744] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.259408] kauditd_printk_skb: 59 callbacks suppressed
	[  +6.396745] kauditd_printk_skb: 76 callbacks suppressed
	[  +5.328143] kauditd_printk_skb: 27 callbacks suppressed
	[Jul 9 16:41] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.578988] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.077732] kauditd_printk_skb: 30 callbacks suppressed
	[ +28.142401] kauditd_printk_skb: 15 callbacks suppressed
	[  +8.445395] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [b276ce256923] <==
	{"level":"warn","ts":"2024-07-09T16:39:07.71909Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.66727ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:86395"}
	{"level":"info","ts":"2024-07-09T16:39:07.719148Z","caller":"traceutil/trace.go:171","msg":"trace[1661190088] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1263; }","duration":"211.773235ms","start":"2024-07-09T16:39:07.507367Z","end":"2024-07-09T16:39:07.71914Z","steps":["trace[1661190088] 'agreement among raft nodes before linearized reading'  (duration: 211.506765ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T16:39:18.896828Z","caller":"traceutil/trace.go:171","msg":"trace[816492857] transaction","detail":"{read_only:false; response_revision:1324; number_of_response:1; }","duration":"128.791657ms","start":"2024-07-09T16:39:18.76801Z","end":"2024-07-09T16:39:18.896802Z","steps":["trace[816492857] 'process raft request'  (duration: 126.042572ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T16:40:38.72295Z","caller":"traceutil/trace.go:171","msg":"trace[640441226] linearizableReadLoop","detail":"{readStateIndex:1646; appliedIndex:1645; }","duration":"228.197219ms","start":"2024-07-09T16:40:38.494708Z","end":"2024-07-09T16:40:38.722905Z","steps":["trace[640441226] 'read index received'  (duration: 226.619295ms)","trace[640441226] 'applied index is now lower than readState.Index'  (duration: 1.577296ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-09T16:40:38.723181Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.43938ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/nvidia-device-plugin-daemonset-fbldv.17e098cde1e48185\" ","response":"range_response_count:1 size:859"}
	{"level":"info","ts":"2024-07-09T16:40:38.723205Z","caller":"traceutil/trace.go:171","msg":"trace[1446433037] range","detail":"{range_begin:/registry/events/kube-system/nvidia-device-plugin-daemonset-fbldv.17e098cde1e48185; range_end:; response_count:1; response_revision:1583; }","duration":"228.53304ms","start":"2024-07-09T16:40:38.494664Z","end":"2024-07-09T16:40:38.723197Z","steps":["trace[1446433037] 'agreement among raft nodes before linearized reading'  (duration: 228.360058ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T16:40:38.723374Z","caller":"traceutil/trace.go:171","msg":"trace[1588454026] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1583; }","duration":"270.110052ms","start":"2024-07-09T16:40:38.453258Z","end":"2024-07-09T16:40:38.723368Z","steps":["trace[1588454026] 'process raft request'  (duration: 268.148768ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T16:41:00.274028Z","caller":"traceutil/trace.go:171","msg":"trace[1583376429] linearizableReadLoop","detail":"{readStateIndex:1867; appliedIndex:1865; }","duration":"182.835975ms","start":"2024-07-09T16:41:00.091149Z","end":"2024-07-09T16:41:00.273984Z","steps":["trace[1583376429] 'read index received'  (duration: 9.65931ms)","trace[1583376429] 'applied index is now lower than readState.Index'  (duration: 173.175692ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-09T16:41:00.274319Z","caller":"traceutil/trace.go:171","msg":"trace[2124022961] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1796; }","duration":"185.878925ms","start":"2024-07-09T16:41:00.08842Z","end":"2024-07-09T16:41:00.274299Z","steps":["trace[2124022961] 'process raft request'  (duration: 155.361162ms)","trace[2124022961] 'compare'  (duration: 29.951709ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-09T16:41:00.274735Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.562034ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/gadget-fns5j\" ","response":"range_response_count:1 size:10076"}
	{"level":"info","ts":"2024-07-09T16:41:00.274782Z","caller":"traceutil/trace.go:171","msg":"trace[584599900] range","detail":"{range_begin:/registry/pods/gadget/gadget-fns5j; range_end:; response_count:1; response_revision:1796; }","duration":"183.632944ms","start":"2024-07-09T16:41:00.091132Z","end":"2024-07-09T16:41:00.274765Z","steps":["trace[584599900] 'agreement among raft nodes before linearized reading'  (duration: 183.490053ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T16:41:00.275173Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.24104ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/my-volcano/\" range_end:\"/registry/pods/my-volcano0\" ","response":"range_response_count:1 size:3735"}
	{"level":"info","ts":"2024-07-09T16:41:00.275202Z","caller":"traceutil/trace.go:171","msg":"trace[1087075118] range","detail":"{range_begin:/registry/pods/my-volcano/; range_end:/registry/pods/my-volcano0; response_count:1; response_revision:1796; }","duration":"140.302642ms","start":"2024-07-09T16:41:00.134891Z","end":"2024-07-09T16:41:00.275193Z","steps":["trace[1087075118] 'agreement among raft nodes before linearized reading'  (duration: 140.199862ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T16:41:00.275337Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.007968ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-09T16:41:00.275361Z","caller":"traceutil/trace.go:171","msg":"trace[694539863] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; response_count:0; response_revision:1796; }","duration":"125.065773ms","start":"2024-07-09T16:41:00.150288Z","end":"2024-07-09T16:41:00.275354Z","steps":["trace[694539863] 'agreement among raft nodes before linearized reading'  (duration: 125.006635ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T16:41:00.275774Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.881894ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/gadget/gadget-698d89675c\" ","response":"range_response_count:1 size:6899"}
	{"level":"info","ts":"2024-07-09T16:41:00.276742Z","caller":"traceutil/trace.go:171","msg":"trace[1854035076] range","detail":"{range_begin:/registry/controllerrevisions/gadget/gadget-698d89675c; range_end:; response_count:1; response_revision:1796; }","duration":"184.360125ms","start":"2024-07-09T16:41:00.091865Z","end":"2024-07-09T16:41:00.276225Z","steps":["trace[1854035076] 'agreement among raft nodes before linearized reading'  (duration: 182.853117ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T16:41:00.279363Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.406682ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2024-07-09T16:41:00.279765Z","caller":"traceutil/trace.go:171","msg":"trace[872817408] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1796; }","duration":"155.63744ms","start":"2024-07-09T16:41:00.124114Z","end":"2024-07-09T16:41:00.279751Z","steps":["trace[872817408] 'agreement among raft nodes before linearized reading'  (duration: 153.985602ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T16:41:02.427375Z","caller":"traceutil/trace.go:171","msg":"trace[764484405] transaction","detail":"{read_only:false; response_revision:1808; number_of_response:1; }","duration":"118.512134ms","start":"2024-07-09T16:41:02.30885Z","end":"2024-07-09T16:41:02.427362Z","steps":["trace[764484405] 'process raft request'  (duration: 118.305944ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T16:41:02.426777Z","caller":"traceutil/trace.go:171","msg":"trace[1320149220] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1807; }","duration":"127.920285ms","start":"2024-07-09T16:41:02.298828Z","end":"2024-07-09T16:41:02.426748Z","steps":["trace[1320149220] 'process raft request'  (duration: 127.436685ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T16:41:05.63445Z","caller":"traceutil/trace.go:171","msg":"trace[188577071] transaction","detail":"{read_only:false; response_revision:1845; number_of_response:1; }","duration":"238.824988ms","start":"2024-07-09T16:41:05.39561Z","end":"2024-07-09T16:41:05.634435Z","steps":["trace[188577071] 'process raft request'  (duration: 238.730757ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T16:47:26.076642Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2297}
	{"level":"info","ts":"2024-07-09T16:47:26.211721Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2297,"took":"133.575814ms","hash":1938867809,"current-db-size-bytes":10940416,"current-db-size":"11 MB","current-db-size-in-use-bytes":2596864,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-07-09T16:47:26.21183Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1938867809,"revision":2297,"compact-revision":-1}
	
	
	==> gcp-auth [81d87583eed1] <==
	2024/07/09 16:40:32 Ready to write response ...
	2024/07/09 16:40:32 Ready to marshal response ...
	2024/07/09 16:40:32 Ready to write response ...
	2024/07/09 16:40:32 Ready to marshal response ...
	2024/07/09 16:40:32 Ready to write response ...
	2024/07/09 16:40:38 Ready to marshal response ...
	2024/07/09 16:40:38 Ready to write response ...
	2024/07/09 16:40:44 Ready to marshal response ...
	2024/07/09 16:40:44 Ready to write response ...
	2024/07/09 16:40:46 Ready to marshal response ...
	2024/07/09 16:40:46 Ready to write response ...
	2024/07/09 16:40:46 Ready to marshal response ...
	2024/07/09 16:40:46 Ready to write response ...
	2024/07/09 16:40:53 Ready to marshal response ...
	2024/07/09 16:40:53 Ready to write response ...
	2024/07/09 16:40:55 Ready to marshal response ...
	2024/07/09 16:40:55 Ready to write response ...
	2024/07/09 16:40:56 Ready to marshal response ...
	2024/07/09 16:40:56 Ready to write response ...
	2024/07/09 16:41:08 Ready to marshal response ...
	2024/07/09 16:41:08 Ready to write response ...
	2024/07/09 16:41:13 Ready to marshal response ...
	2024/07/09 16:41:13 Ready to write response ...
	2024/07/09 16:41:38 Ready to marshal response ...
	2024/07/09 16:41:38 Ready to write response ...
	
	
	==> kernel <==
	 16:49:14 up 12 min,  0 users,  load average: 0.44, 0.67, 0.62
	Linux addons-470383 5.10.207 #1 SMP Mon Jul 8 14:53:58 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d4d3086f651a] <==
	I0709 16:41:14.009036       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0709 16:41:14.318140       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0709 16:41:14.503857       1 cacher.go:168] Terminating all watchers from cacher commands.bus.volcano.sh
	I0709 16:41:14.535447       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0709 16:41:14.774825       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0709 16:41:14.864125       1 cacher.go:168] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0709 16:41:15.013550       1 cacher.go:168] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0709 16:41:15.070963       1 cacher.go:168] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0709 16:41:15.101112       1 cacher.go:168] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0709 16:41:15.776477       1 cacher.go:168] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0709 16:41:15.976946       1 cacher.go:168] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0709 16:41:16.297179       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0709 16:41:42.597888       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0709 16:41:47.557418       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	I0709 16:41:53.786217       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0709 16:41:53.786470       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0709 16:41:53.860183       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0709 16:41:53.860249       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0709 16:41:53.914984       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0709 16:41:53.915230       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0709 16:41:53.932072       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0709 16:41:53.932123       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0709 16:41:54.863209       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0709 16:41:54.932755       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0709 16:41:54.975967       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [62cb6767f140] <==
	E0709 16:48:34.157046       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0709 16:48:35.191387       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0709 16:48:35.191458       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0709 16:48:38.553383       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0709 16:48:38.553437       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0709 16:48:39.027842       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0709 16:48:39.028158       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0709 16:48:42.554721       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0709 16:48:42.554773       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0709 16:48:57.317045       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0709 16:48:57.317094       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0709 16:48:58.032862       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0709 16:48:58.032895       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0709 16:49:04.339798       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0709 16:49:04.339878       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0709 16:49:05.197527       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0709 16:49:05.197692       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0709 16:49:10.782018       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0709 16:49:10.782457       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0709 16:49:12.742103       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0709 16:49:12.742337       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0709 16:49:14.523344       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0709 16:49:14.523380       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0709 16:49:14.712064       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0709 16:49:14.712122       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [56c89c638fd0] <==
	I0709 16:37:47.052365       1 server_linux.go:69] "Using iptables proxy"
	I0709 16:37:47.071637       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.216"]
	I0709 16:37:47.196428       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0709 16:37:47.196508       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0709 16:37:47.196525       1 server_linux.go:165] "Using iptables Proxier"
	I0709 16:37:47.218781       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0709 16:37:47.219045       1 server.go:872] "Version info" version="v1.30.2"
	I0709 16:37:47.219060       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0709 16:37:47.228175       1 config.go:192] "Starting service config controller"
	I0709 16:37:47.228211       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0709 16:37:47.228234       1 config.go:101] "Starting endpoint slice config controller"
	I0709 16:37:47.228238       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0709 16:37:47.233880       1 config.go:319] "Starting node config controller"
	I0709 16:37:47.233912       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0709 16:37:47.328373       1 shared_informer.go:320] Caches are synced for service config
	I0709 16:37:47.328460       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0709 16:37:47.335619       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ace02a99e923] <==
	W0709 16:37:27.646738       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0709 16:37:27.647912       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0709 16:37:27.647213       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0709 16:37:27.647927       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0709 16:37:27.647047       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0709 16:37:27.648222       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0709 16:37:28.481016       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0709 16:37:28.481157       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0709 16:37:28.516745       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0709 16:37:28.516775       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0709 16:37:28.556414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0709 16:37:28.556543       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0709 16:37:28.559205       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0709 16:37:28.559699       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0709 16:37:28.709058       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0709 16:37:28.709202       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0709 16:37:28.789983       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0709 16:37:28.790123       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0709 16:37:28.891414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0709 16:37:28.891742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0709 16:37:28.906807       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0709 16:37:28.906914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0709 16:37:28.966490       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0709 16:37:28.966682       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0709 16:37:31.236100       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 09 16:44:30 addons-470383 kubelet[2021]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 16:44:30 addons-470383 kubelet[2021]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 16:45:30 addons-470383 kubelet[2021]: E0709 16:45:30.513519    2021 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 16:45:30 addons-470383 kubelet[2021]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 16:45:30 addons-470383 kubelet[2021]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 16:45:30 addons-470383 kubelet[2021]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 16:45:30 addons-470383 kubelet[2021]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 16:45:44 addons-470383 kubelet[2021]: I0709 16:45:44.492864    2021 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7db6d8ff4d-zb54k" secret="" err="secret \"gcp-auth\" not found"
	Jul 09 16:46:30 addons-470383 kubelet[2021]: E0709 16:46:30.513513    2021 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 16:46:30 addons-470383 kubelet[2021]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 16:46:30 addons-470383 kubelet[2021]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 16:46:30 addons-470383 kubelet[2021]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 16:46:30 addons-470383 kubelet[2021]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 16:47:09 addons-470383 kubelet[2021]: I0709 16:47:09.492460    2021 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7db6d8ff4d-zb54k" secret="" err="secret \"gcp-auth\" not found"
	Jul 09 16:47:30 addons-470383 kubelet[2021]: E0709 16:47:30.514918    2021 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 16:47:30 addons-470383 kubelet[2021]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 16:47:30 addons-470383 kubelet[2021]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 16:47:30 addons-470383 kubelet[2021]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 16:47:30 addons-470383 kubelet[2021]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 16:48:24 addons-470383 kubelet[2021]: I0709 16:48:24.492509    2021 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7db6d8ff4d-zb54k" secret="" err="secret \"gcp-auth\" not found"
	Jul 09 16:48:30 addons-470383 kubelet[2021]: E0709 16:48:30.513621    2021 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 16:48:30 addons-470383 kubelet[2021]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 16:48:30 addons-470383 kubelet[2021]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 16:48:30 addons-470383 kubelet[2021]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 16:48:30 addons-470383 kubelet[2021]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [63791a503635] <==
	I0709 16:37:53.848161       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0709 16:37:53.909993       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0709 16:37:53.910072       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0709 16:37:53.944157       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0709 16:37:53.944277       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-470383_08964e23-e953-4bef-911f-a81db629659f!
	I0709 16:37:53.949782       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d2dfe659-28aa-46e9-99f1-3539a4dcc266", APIVersion:"v1", ResourceVersion:"662", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-470383_08964e23-e953-4bef-911f-a81db629659f became leader
	I0709 16:37:54.045141       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-470383_08964e23-e953-4bef-911f-a81db629659f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-470383 -n addons-470383
helpers_test.go:261: (dbg) Run:  kubectl --context addons-470383 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-jcxj8 ingress-nginx-admission-patch-nvnwq
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-470383 describe pod ingress-nginx-admission-create-jcxj8 ingress-nginx-admission-patch-nvnwq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-470383 describe pod ingress-nginx-admission-create-jcxj8 ingress-nginx-admission-patch-nvnwq: exit status 1 (60.715563ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jcxj8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-nvnwq" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-470383 describe pod ingress-nginx-admission-create-jcxj8 ingress-nginx-admission-patch-nvnwq: exit status 1
--- FAIL: TestAddons/parallel/Ingress (483.07s)


Test pass (309/341)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.74
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.2/json-events 4.6
13 TestDownloadOnly/v1.30.2/preload-exists 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.06
18 TestDownloadOnly/v1.30.2/DeleteAll 0.13
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.54
22 TestOffline 106.42
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.04
27 TestAddons/Setup 229.91
29 TestAddons/parallel/Registry 16.19
31 TestAddons/parallel/InspektorGadget 12.31
32 TestAddons/parallel/MetricsServer 6.73
33 TestAddons/parallel/HelmTiller 13.05
35 TestAddons/parallel/CSI 64.2
36 TestAddons/parallel/Headlamp 13.93
37 TestAddons/parallel/CloudSpanner 5.46
38 TestAddons/parallel/LocalPath 54.34
39 TestAddons/parallel/NvidiaDevicePlugin 6.88
40 TestAddons/parallel/Yakd 5.01
41 TestAddons/parallel/Volcano 43.7
44 TestAddons/serial/GCPAuth/Namespaces 0.14
45 TestAddons/StoppedEnableDisable 13.55
46 TestCertOptions 84.41
47 TestCertExpiration 326.76
48 TestDockerFlags 92.54
49 TestForceSystemdFlag 55.02
50 TestForceSystemdEnv 104.53
52 TestKVMDriverInstallOrUpdate 3.73
56 TestErrorSpam/setup 51.76
57 TestErrorSpam/start 0.32
58 TestErrorSpam/status 0.72
59 TestErrorSpam/pause 1.18
60 TestErrorSpam/unpause 1.24
61 TestErrorSpam/stop 16.07
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 66.82
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 44.02
68 TestFunctional/serial/KubeContext 0.05
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.38
73 TestFunctional/serial/CacheCmd/cache/add_local 1.25
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.18
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
81 TestFunctional/serial/ExtraConfig 42.68
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.08
84 TestFunctional/serial/LogsFileCmd 1.1
85 TestFunctional/serial/InvalidService 4.11
87 TestFunctional/parallel/ConfigCmd 0.33
88 TestFunctional/parallel/DashboardCmd 32.65
89 TestFunctional/parallel/DryRun 0.27
90 TestFunctional/parallel/InternationalLanguage 0.15
91 TestFunctional/parallel/StatusCmd 1.04
95 TestFunctional/parallel/ServiceCmdConnect 11.59
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 51.27
99 TestFunctional/parallel/SSHCmd 0.45
100 TestFunctional/parallel/CpCmd 1.53
101 TestFunctional/parallel/MySQL 32.12
102 TestFunctional/parallel/FileSync 0.25
103 TestFunctional/parallel/CertSync 1.6
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.22
111 TestFunctional/parallel/License 0.21
112 TestFunctional/parallel/DockerEnv/bash 0.88
122 TestFunctional/parallel/ServiceCmd/DeployApp 12.22
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.3
124 TestFunctional/parallel/ProfileCmd/profile_list 0.28
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
126 TestFunctional/parallel/MountCmd/any-port 8.54
127 TestFunctional/parallel/MountCmd/specific-port 1.57
128 TestFunctional/parallel/MountCmd/VerifyCleanup 1.37
129 TestFunctional/parallel/ServiceCmd/List 0.33
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
135 TestFunctional/parallel/ServiceCmd/Format 0.37
136 TestFunctional/parallel/ServiceCmd/URL 0.48
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.81
142 TestFunctional/parallel/ImageCommands/Setup 1.36
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.8
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.74
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.88
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.65
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.59
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.93
150 TestFunctional/parallel/Version/short 0.05
151 TestFunctional/parallel/Version/components 0.59
152 TestFunctional/delete_addon-resizer_images 0.07
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
155 TestGvisorAddon 233.1
158 TestMultiControlPlane/serial/StartCluster 216.11
159 TestMultiControlPlane/serial/DeployApp 6.31
160 TestMultiControlPlane/serial/PingHostFromPods 1.24
161 TestMultiControlPlane/serial/AddWorkerNode 50.14
162 TestMultiControlPlane/serial/NodeLabels 0.07
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.56
164 TestMultiControlPlane/serial/CopyFile 13.02
165 TestMultiControlPlane/serial/StopSecondaryNode 13.3
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.4
167 TestMultiControlPlane/serial/RestartSecondaryNode 28.88
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.56
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 202.24
170 TestMultiControlPlane/serial/DeleteSecondaryNode 8.24
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
172 TestMultiControlPlane/serial/StopCluster 38.35
173 TestMultiControlPlane/serial/RestartCluster 170.91
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
175 TestMultiControlPlane/serial/AddSecondaryNode 79.95
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
179 TestImageBuild/serial/Setup 49.78
180 TestImageBuild/serial/NormalBuild 1.55
181 TestImageBuild/serial/BuildWithBuildArg 0.94
182 TestImageBuild/serial/BuildWithDockerIgnore 0.37
183 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.3
187 TestJSONOutput/start/Command 64.41
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.59
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.57
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 13.34
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.18
215 TestMainNoArgs 0.04
216 TestMinikubeProfile 100.52
219 TestMountStart/serial/StartWithMountFirst 30.32
220 TestMountStart/serial/VerifyMountFirst 0.36
221 TestMountStart/serial/StartWithMountSecond 27.49
222 TestMountStart/serial/VerifyMountSecond 0.36
223 TestMountStart/serial/DeleteFirst 0.72
224 TestMountStart/serial/VerifyMountPostDelete 0.37
225 TestMountStart/serial/Stop 2.27
226 TestMountStart/serial/RestartStopped 26.23
227 TestMountStart/serial/VerifyMountPostStop 0.37
230 TestMultiNode/serial/FreshStart2Nodes 118.1
231 TestMultiNode/serial/DeployApp2Nodes 4.01
232 TestMultiNode/serial/PingHostFrom2Pods 0.82
233 TestMultiNode/serial/AddNode 48.48
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.2
236 TestMultiNode/serial/CopyFile 6.95
237 TestMultiNode/serial/StopNode 3.4
238 TestMultiNode/serial/StartAfterStop 31.93
239 TestMultiNode/serial/RestartKeepsNodes 235.4
240 TestMultiNode/serial/DeleteNode 2.41
241 TestMultiNode/serial/StopMultiNode 25.17
242 TestMultiNode/serial/RestartMultiNode 88.29
243 TestMultiNode/serial/ValidateNameConflict 52.2
248 TestPreload 316.64
250 TestScheduledStopUnix 119.8
251 TestSkaffold 148.23
254 TestRunningBinaryUpgrade 174.15
256 TestKubernetesUpgrade 162.75
269 TestStoppedBinaryUpgrade/Setup 0.34
270 TestStoppedBinaryUpgrade/Upgrade 253.14
272 TestPause/serial/Start 105.98
281 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
282 TestNoKubernetes/serial/StartWithK8s 78.28
283 TestNetworkPlugins/group/auto/Start 141.83
284 TestNoKubernetes/serial/StartWithStopK8s 35.1
285 TestPause/serial/SecondStartNoReconfiguration 51.5
286 TestNoKubernetes/serial/Start 31.79
287 TestStoppedBinaryUpgrade/MinikubeLogs 1.05
288 TestNetworkPlugins/group/kindnet/Start 89.15
289 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
290 TestNoKubernetes/serial/ProfileList 1.22
291 TestNoKubernetes/serial/Stop 2.43
292 TestNoKubernetes/serial/StartNoArgs 44.62
293 TestPause/serial/Pause 0.58
294 TestPause/serial/VerifyStatus 0.26
295 TestPause/serial/Unpause 0.55
296 TestPause/serial/PauseAgain 0.72
297 TestPause/serial/DeletePaused 1.04
298 TestPause/serial/VerifyDeletedResources 0.27
299 TestNetworkPlugins/group/calico/Start 127.86
300 TestNetworkPlugins/group/auto/KubeletFlags 0.19
301 TestNetworkPlugins/group/auto/NetCatPod 10.23
302 TestNetworkPlugins/group/auto/DNS 0.19
303 TestNetworkPlugins/group/auto/Localhost 0.16
304 TestNetworkPlugins/group/auto/HairPin 0.14
305 TestNetworkPlugins/group/custom-flannel/Start 113.14
306 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
307 TestNetworkPlugins/group/false/Start 123.09
308 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
309 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
310 TestNetworkPlugins/group/kindnet/NetCatPod 13.19
311 TestNetworkPlugins/group/kindnet/DNS 0.17
312 TestNetworkPlugins/group/kindnet/Localhost 0.12
313 TestNetworkPlugins/group/kindnet/HairPin 0.14
314 TestNetworkPlugins/group/enable-default-cni/Start 127.5
315 TestNetworkPlugins/group/calico/ControllerPod 6.01
316 TestNetworkPlugins/group/calico/KubeletFlags 0.24
317 TestNetworkPlugins/group/calico/NetCatPod 11.28
318 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
319 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.23
320 TestNetworkPlugins/group/calico/DNS 0.19
321 TestNetworkPlugins/group/calico/Localhost 0.17
322 TestNetworkPlugins/group/calico/HairPin 0.19
323 TestNetworkPlugins/group/custom-flannel/DNS 0.3
324 TestNetworkPlugins/group/custom-flannel/Localhost 0.25
325 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
326 TestNetworkPlugins/group/false/KubeletFlags 0.25
327 TestNetworkPlugins/group/false/NetCatPod 12.39
328 TestNetworkPlugins/group/flannel/Start 80.42
329 TestNetworkPlugins/group/bridge/Start 94.89
330 TestNetworkPlugins/group/false/DNS 0.19
331 TestNetworkPlugins/group/false/Localhost 0.16
332 TestNetworkPlugins/group/false/HairPin 0.15
333 TestNetworkPlugins/group/kubenet/Start 105.75
334 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
335 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.24
336 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
337 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
338 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
339 TestNetworkPlugins/group/flannel/ControllerPod 6.01
341 TestStartStop/group/old-k8s-version/serial/FirstStart 147.44
342 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
343 TestNetworkPlugins/group/flannel/NetCatPod 12.32
344 TestNetworkPlugins/group/flannel/DNS 0.2
345 TestNetworkPlugins/group/flannel/Localhost 0.17
346 TestNetworkPlugins/group/flannel/HairPin 0.16
347 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
348 TestNetworkPlugins/group/bridge/NetCatPod 12.24
349 TestNetworkPlugins/group/bridge/DNS 0.2
350 TestNetworkPlugins/group/bridge/Localhost 0.16
351 TestNetworkPlugins/group/bridge/HairPin 0.17
353 TestStartStop/group/no-preload/serial/FirstStart 119.84
354 TestNetworkPlugins/group/kubenet/KubeletFlags 0.39
355 TestNetworkPlugins/group/kubenet/NetCatPod 10.21
357 TestStartStop/group/embed-certs/serial/FirstStart 119.83
358 TestNetworkPlugins/group/kubenet/DNS 0.15
359 TestNetworkPlugins/group/kubenet/Localhost 0.15
360 TestNetworkPlugins/group/kubenet/HairPin 0.14
362 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 125.06
363 TestStartStop/group/old-k8s-version/serial/DeployApp 8.48
364 TestStartStop/group/no-preload/serial/DeployApp 8.42
365 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.97
366 TestStartStop/group/old-k8s-version/serial/Stop 13.34
367 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
368 TestStartStop/group/no-preload/serial/Stop 13.4
369 TestStartStop/group/embed-certs/serial/DeployApp 9.28
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
371 TestStartStop/group/old-k8s-version/serial/SecondStart 400.28
372 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
373 TestStartStop/group/no-preload/serial/SecondStart 330.1
374 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.94
375 TestStartStop/group/embed-certs/serial/Stop 13.36
376 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
377 TestStartStop/group/embed-certs/serial/SecondStart 347.77
378 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.33
379 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
380 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.32
381 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
382 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 325.49
383 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.01
384 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
386 TestStartStop/group/no-preload/serial/Pause 2.64
388 TestStartStop/group/newest-cni/serial/FirstStart 71.29
389 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
390 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
391 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.03
392 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
393 TestStartStop/group/embed-certs/serial/Pause 2.76
394 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
395 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
396 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.53
397 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
398 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
399 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
400 TestStartStop/group/old-k8s-version/serial/Pause 2.36
401 TestStartStop/group/newest-cni/serial/DeployApp 0
402 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.81
403 TestStartStop/group/newest-cni/serial/Stop 13.31
404 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
405 TestStartStop/group/newest-cni/serial/SecondStart 36.91
406 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
408 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.19
409 TestStartStop/group/newest-cni/serial/Pause 2.11
TestDownloadOnly/v1.20.0/json-events (10.74s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-894250 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-894250 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (10.741572716s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.74s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-894250
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-894250: exit status 85 (54.980767ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-894250 | jenkins | v1.33.1 | 09 Jul 24 16:36 UTC |          |
	|         | -p download-only-894250        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/09 16:36:25
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0709 16:36:25.269208   14714 out.go:291] Setting OutFile to fd 1 ...
	I0709 16:36:25.269339   14714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 16:36:25.269349   14714 out.go:304] Setting ErrFile to fd 2...
	I0709 16:36:25.269353   14714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 16:36:25.269535   14714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19199-7540/.minikube/bin
	W0709 16:36:25.269663   14714 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19199-7540/.minikube/config/config.json: open /home/jenkins/minikube-integration/19199-7540/.minikube/config/config.json: no such file or directory
	I0709 16:36:25.270271   14714 out.go:298] Setting JSON to true
	I0709 16:36:25.271164   14714 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1126,"bootTime":1720541859,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0709 16:36:25.271219   14714 start.go:139] virtualization: kvm guest
	I0709 16:36:25.273814   14714 out.go:97] [download-only-894250] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0709 16:36:25.273948   14714 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/19199-7540/.minikube/cache/preloaded-tarball: no such file or directory
	I0709 16:36:25.273985   14714 notify.go:220] Checking for updates...
	I0709 16:36:25.275421   14714 out.go:169] MINIKUBE_LOCATION=19199
	I0709 16:36:25.276789   14714 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 16:36:25.278159   14714 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19199-7540/kubeconfig
	I0709 16:36:25.279627   14714 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19199-7540/.minikube
	I0709 16:36:25.281090   14714 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0709 16:36:25.283575   14714 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0709 16:36:25.283818   14714 driver.go:392] Setting default libvirt URI to qemu:///system
	I0709 16:36:25.382008   14714 out.go:97] Using the kvm2 driver based on user configuration
	I0709 16:36:25.382050   14714 start.go:297] selected driver: kvm2
	I0709 16:36:25.382061   14714 start.go:901] validating driver "kvm2" against <nil>
	I0709 16:36:25.382385   14714 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 16:36:25.382520   14714 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19199-7540/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0709 16:36:25.396257   14714 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0709 16:36:25.396298   14714 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0709 16:36:25.396795   14714 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0709 16:36:25.396934   14714 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0709 16:36:25.396987   14714 cni.go:84] Creating CNI manager for ""
	I0709 16:36:25.397002   14714 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0709 16:36:25.397053   14714 start.go:340] cluster config:
	{Name:download-only-894250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-894250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 16:36:25.397231   14714 iso.go:125] acquiring lock: {Name:mk93d6a6f33561e26ce93d6660cdedc1d654228a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 16:36:25.399177   14714 out.go:97] Downloading VM boot image ...
	I0709 16:36:25.399215   14714 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19199-7540/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso
	I0709 16:36:29.151890   14714 out.go:97] Starting "download-only-894250" primary control-plane node in "download-only-894250" cluster
	I0709 16:36:29.151914   14714 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0709 16:36:29.177745   14714 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0709 16:36:29.177803   14714 cache.go:56] Caching tarball of preloaded images
	I0709 16:36:29.177959   14714 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0709 16:36:29.179985   14714 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0709 16:36:29.180012   14714 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0709 16:36:29.209641   14714 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19199-7540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0709 16:36:33.550242   14714 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0709 16:36:33.550340   14714 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/19199-7540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0709 16:36:34.424678   14714 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0709 16:36:34.425045   14714 profile.go:143] Saving config to /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/download-only-894250/config.json ...
	I0709 16:36:34.425076   14714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/download-only-894250/config.json: {Name:mkdebc85fcd10dcc1b1266a7ff934c42c1c418fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 16:36:34.425248   14714 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0709 16:36:34.425447   14714 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19199-7540/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-894250 host does not exist
	  To start a cluster, run: "minikube start -p download-only-894250"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-894250
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.30.2/json-events (4.6s)

=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-574622 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-574622 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=kvm2 : (4.602410635s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (4.60s)

TestDownloadOnly/v1.30.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

TestDownloadOnly/v1.30.2/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-574622
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-574622: exit status 85 (55.497442ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-894250 | jenkins | v1.33.1 | 09 Jul 24 16:36 UTC |                     |
	|         | -p download-only-894250        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 09 Jul 24 16:36 UTC | 09 Jul 24 16:36 UTC |
	| delete  | -p download-only-894250        | download-only-894250 | jenkins | v1.33.1 | 09 Jul 24 16:36 UTC | 09 Jul 24 16:36 UTC |
	| start   | -o=json --download-only        | download-only-574622 | jenkins | v1.33.1 | 09 Jul 24 16:36 UTC |                     |
	|         | -p download-only-574622        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/09 16:36:36
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0709 16:36:36.309667   14924 out.go:291] Setting OutFile to fd 1 ...
	I0709 16:36:36.309812   14924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 16:36:36.309822   14924 out.go:304] Setting ErrFile to fd 2...
	I0709 16:36:36.309828   14924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 16:36:36.309999   14924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19199-7540/.minikube/bin
	I0709 16:36:36.310542   14924 out.go:298] Setting JSON to true
	I0709 16:36:36.311365   14924 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1137,"bootTime":1720541859,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0709 16:36:36.311427   14924 start.go:139] virtualization: kvm guest
	I0709 16:36:36.313645   14924 out.go:97] [download-only-574622] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0709 16:36:36.313782   14924 notify.go:220] Checking for updates...
	I0709 16:36:36.315095   14924 out.go:169] MINIKUBE_LOCATION=19199
	I0709 16:36:36.316481   14924 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 16:36:36.317825   14924 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19199-7540/kubeconfig
	I0709 16:36:36.319074   14924 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19199-7540/.minikube
	I0709 16:36:36.320346   14924 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-574622 host does not exist
	  To start a cluster, run: "minikube start -p download-only-574622"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.06s)

TestDownloadOnly/v1.30.2/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.13s)

TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-574622
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-675753 --alsologtostderr --binary-mirror http://127.0.0.1:44415 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-675753" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-675753
--- PASS: TestBinaryMirror (0.54s)

TestOffline (106.42s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-300005 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-300005 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m45.278646534s)
helpers_test.go:175: Cleaning up "offline-docker-300005" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-300005
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-300005: (1.145941012s)
--- PASS: TestOffline (106.42s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-470383
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-470383: exit status 85 (45.121655ms)

-- stdout --
	* Profile "addons-470383" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-470383"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-470383
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-470383: exit status 85 (44.149295ms)

-- stdout --
	* Profile "addons-470383" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-470383"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

TestAddons/Setup (229.91s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-470383 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-470383 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m49.911002573s)
--- PASS: TestAddons/Setup (229.91s)

TestAddons/parallel/Registry (16.19s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 17.818729ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-qw6rf" [2d4c0856-2f3e-42c9-96eb-940302a22b52] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.011413078s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pqgzx" [f8b3ce45-0b02-4af7-8df1-31575bfe65f4] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004447105s
addons_test.go:342: (dbg) Run:  kubectl --context addons-470383 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-470383 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-470383 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.488723289s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-470383 ip
2024/07/09 16:40:47 [DEBUG] GET http://192.168.39.216:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-470383 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.19s)

TestAddons/parallel/InspektorGadget (12.31s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-fns5j" [cd4e2aab-820a-42d3-ac80-47c98224ecf1] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004031763s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-470383
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-470383: (6.306295584s)
--- PASS: TestAddons/parallel/InspektorGadget (12.31s)

TestAddons/parallel/MetricsServer (6.73s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.73834ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-2s6bg" [b14f1175-baf5-4ce1-8f58-e7e12b70e88a] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004650688s
addons_test.go:417: (dbg) Run:  kubectl --context addons-470383 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-470383 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.73s)

TestAddons/parallel/HelmTiller (13.05s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 17.557109ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-cgq7t" [22a78248-d085-4876-9a88-009128ffc0f0] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.017218798s
addons_test.go:475: (dbg) Run:  kubectl --context addons-470383 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-470383 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.479758795s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-470383 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.05s)

TestAddons/parallel/CSI (64.2s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 4.630994ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-470383 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-470383 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b1802992-2724-4b43-98e5-39e8a0470bb9] Pending
helpers_test.go:344: "task-pv-pod" [b1802992-2724-4b43-98e5-39e8a0470bb9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b1802992-2724-4b43-98e5-39e8a0470bb9] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004175966s
addons_test.go:586: (dbg) Run:  kubectl --context addons-470383 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-470383 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-470383 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-470383 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-470383 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-470383 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-470383 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [84eac73a-79ae-402f-901a-3a00d5c8b575] Pending
helpers_test.go:344: "task-pv-pod-restore" [84eac73a-79ae-402f-901a-3a00d5c8b575] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [84eac73a-79ae-402f-901a-3a00d5c8b575] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003859957s
addons_test.go:628: (dbg) Run:  kubectl --context addons-470383 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-470383 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-470383 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-470383 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-470383 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.709177313s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-470383 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (64.20s)
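The runs of repeated helpers_test.go:394 lines above are the test's poll loop: it re-runs `kubectl get pvc … -o jsonpath={.status.phase}` until the claim reports Bound or the 6m0s budget expires. A minimal sketch of that loop in shell, with `check_phase` as a stand-in for the real kubectl call (which needs a live cluster):

```shell
# Stand-in for: kubectl --context addons-470383 get pvc hpvc \
#   -o 'jsonpath={.status.phase}' -n default
check_phase() { echo "Bound"; }

# Poll until the PVC phase is Bound, roughly what the 6m0s wait does.
phase=""
for attempt in $(seq 1 180); do
  phase=$(check_phase)
  if [ "$phase" = "Bound" ]; then
    break
  fi
  sleep 2
done
echo "pvc phase after $attempt attempt(s): $phase"
```

The real helper additionally logs each attempt, which is why a slow bind shows up as a wall of identical `get pvc` lines in the report.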

                                                
                                    
TestAddons/parallel/Headlamp (13.93s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-470383 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-fwm7f" [35e61788-e057-478e-927f-535a4a25a35f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-fwm7f" [35e61788-e057-478e-927f-535a4a25a35f] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004686793s
--- PASS: TestAddons/parallel/Headlamp (13.93s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.46s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-v8xpn" [df9e7d71-5e8f-4d32-b39e-375481d09547] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003645829s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-470383
--- PASS: TestAddons/parallel/CloudSpanner (5.46s)

                                                
                                    
TestAddons/parallel/LocalPath (54.34s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-470383 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-470383 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-470383 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [74394350-c559-4142-b8bf-2def1d164663] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [74394350-c559-4142-b8bf-2def1d164663] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [74394350-c559-4142-b8bf-2def1d164663] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00385151s
addons_test.go:992: (dbg) Run:  kubectl --context addons-470383 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-470383 ssh "cat /opt/local-path-provisioner/pvc-35f374b4-91fa-495b-8ec2-70cb471acc67_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-470383 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-470383 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-470383 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-amd64 -p addons-470383 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.535656976s)
--- PASS: TestAddons/parallel/LocalPath (54.34s)
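The testdata/storage-provisioner-rancher manifests applied above are not reproduced in the log. A plausible minimal sketch of the `test-pvc` claim follows; the `local-path` storage class name is an assumption based on the rancher local-path-provisioner defaults, and the size is invented:

```yaml
# Hypothetical reconstruction of testdata/storage-provisioner-rancher/pvc.yaml;
# storageClassName and storage size are assumptions, not taken from this log.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 64Mi
```

The provisioner binds such a claim lazily, which matches the handful of Pending polls before the pod transitions through PodCompleted above.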

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.88s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fbldv" [000921d8-448e-426c-9e4b-7d8c82757189] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005414595s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-470383
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.88s)

                                                
                                    
TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-wmgc5" [abfa95b1-d575-4551-bf55-13febe95ccfe] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004936846s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
TestAddons/parallel/Volcano (43.7s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:905: volcano-controller stabilized in 3.199078ms
addons_test.go:889: volcano-scheduler stabilized in 3.722585ms
addons_test.go:897: volcano-admission stabilized in 4.566347ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-frbgk" [4af734c8-cf7f-4b03-a431-3270e94319ac] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.00472248s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-m69km" [2f6ff3ab-bc72-42b2-8ef0-fdc345ad9a16] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.004692369s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-jtdkg" [f4914cb1-fe0f-4133-b28e-719c839a9bf2] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.005655585s
addons_test.go:924: (dbg) Run:  kubectl --context addons-470383 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-470383 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-470383 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [70cad553-6a3e-4a3b-be90-496b0e48859b] Pending
helpers_test.go:344: "test-job-nginx-0" [70cad553-6a3e-4a3b-be90-496b0e48859b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [70cad553-6a3e-4a3b-be90-496b0e48859b] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 18.004573204s
addons_test.go:960: (dbg) Run:  out/minikube-linux-amd64 -p addons-470383 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-linux-amd64 -p addons-470383 addons disable volcano --alsologtostderr -v=1: (10.344665155s)
--- PASS: TestAddons/parallel/Volcano (43.70s)
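The testdata/vcjob.yaml submitted at addons_test.go:930 is not shown in the log. A minimal Volcano Job consistent with the `test-job-nginx-0` pod seen above might look like the following sketch; every field here is an assumption, not taken from the report:

```yaml
# Hypothetical sketch of testdata/vcjob.yaml; the task name "nginx" matches
# the observed pod name test-job-nginx-0, everything else is assumed.
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  schedulerName: volcano
  minAvailable: 1
  tasks:
    - name: nginx
      replicas: 1
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: nginx
              image: nginx:latest
```

Note that the volcano-admission webhook this addon installs is also what caused the Ingress failure in the header of this report: once the addon's service is gone, pod creation in other tests fails with "volcano-admission-service not found".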

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-470383 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-470383 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/StoppedEnableDisable (13.55s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-470383
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-470383: (13.289437118s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-470383
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-470383
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-470383
--- PASS: TestAddons/StoppedEnableDisable (13.55s)

                                                
                                    
TestCertOptions (84.41s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-908870 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E0709 17:38:44.779184   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
E0709 17:38:47.153670   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/skaffold-323186/client.crt: no such file or directory
E0709 17:38:47.158946   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/skaffold-323186/client.crt: no such file or directory
E0709 17:38:47.169238   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/skaffold-323186/client.crt: no such file or directory
E0709 17:38:47.189533   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/skaffold-323186/client.crt: no such file or directory
E0709 17:38:47.229920   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/skaffold-323186/client.crt: no such file or directory
E0709 17:38:47.310225   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/skaffold-323186/client.crt: no such file or directory
E0709 17:38:47.470645   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/skaffold-323186/client.crt: no such file or directory
E0709 17:38:47.791312   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/skaffold-323186/client.crt: no such file or directory
E0709 17:38:48.432235   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/skaffold-323186/client.crt: no such file or directory
E0709 17:38:49.712651   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/skaffold-323186/client.crt: no such file or directory
E0709 17:38:52.273351   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/skaffold-323186/client.crt: no such file or directory
E0709 17:38:57.394387   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/skaffold-323186/client.crt: no such file or directory
E0709 17:39:07.635163   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/skaffold-323186/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-908870 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m22.733241194s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-908870 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-908870 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-908870 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-908870" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-908870
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-908870: (1.088116665s)
--- PASS: TestCertOptions (84.41s)
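cert_options_test.go:60 verifies the SANs baked into apiserver.crt with `openssl x509 -text -noout`. The same inspection can be reproduced locally against a throwaway self-signed certificate carrying the IPs and names from the test's flags; the scratch paths below are local stand-ins, not minikube's real certificate locations:

```shell
# Generate a throwaway self-signed cert with the SANs the test flags request
# (--apiserver-ips / --apiserver-names), then inspect it the way the test does.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/apiserver.key" -out "$dir/apiserver.crt" \
  -subj "/CN=minikube" \
  -addext "subjectAltName=DNS:localhost,DNS:www.google.com,IP:127.0.0.1,IP:192.168.15.15" \
  2>/dev/null

# Same inspection the test runs over ssh, pointed at the scratch cert instead.
sans=$(openssl x509 -text -noout -in "$dir/apiserver.crt" | grep -A1 "Subject Alternative Name")
echo "$sans"
rm -rf "$dir"
```

The printed extension should list each DNS name and IP address passed via `-addext`, which is exactly what the test greps for in minikube's generated apiserver certificate.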

                                                
                                    
TestCertExpiration (326.76s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-803703 --memory=2048 --cert-expiration=3m --driver=kvm2 
E0709 17:35:31.910786   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-803703 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m53.762500117s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-803703 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-803703 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (31.997817272s)
helpers_test.go:175: Cleaning up "cert-expiration-803703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-803703
--- PASS: TestCertExpiration (326.76s)

                                                
                                    
TestDockerFlags (92.54s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-187388 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-187388 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m30.992558568s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-187388 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-187388 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-187388" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-187388
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-187388: (1.067097913s)
--- PASS: TestDockerFlags (92.54s)

                                                
                                    
TestForceSystemdFlag (55.02s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-361775 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-361775 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (53.960959138s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-361775 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-361775" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-361775
--- PASS: TestForceSystemdFlag (55.02s)

                                                
                                    
TestForceSystemdEnv (104.53s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-085689 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-085689 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m43.266120298s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-085689 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-085689" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-085689
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-085689: (1.038446027s)
--- PASS: TestForceSystemdEnv (104.53s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.73s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.73s)

                                                
                                    
TestErrorSpam/setup (51.76s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-748437 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-748437 --driver=kvm2 
E0709 16:50:31.910714   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
E0709 16:50:31.916412   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
E0709 16:50:31.926690   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
E0709 16:50:31.946967   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
E0709 16:50:31.987265   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
E0709 16:50:32.067560   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
E0709 16:50:32.227960   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
E0709 16:50:32.548604   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
E0709 16:50:33.189534   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
E0709 16:50:34.470112   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
E0709 16:50:37.030697   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-748437 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-748437 --driver=kvm2 : (51.760671944s)
--- PASS: TestErrorSpam/setup (51.76s)

TestErrorSpam/start (0.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748437 --log_dir /tmp/nospam-748437 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748437 --log_dir /tmp/nospam-748437 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748437 --log_dir /tmp/nospam-748437 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

TestErrorSpam/status (0.72s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748437 --log_dir /tmp/nospam-748437 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748437 --log_dir /tmp/nospam-748437 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748437 --log_dir /tmp/nospam-748437 status
--- PASS: TestErrorSpam/status (0.72s)

TestErrorSpam/pause (1.18s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748437 --log_dir /tmp/nospam-748437 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748437 --log_dir /tmp/nospam-748437 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748437 --log_dir /tmp/nospam-748437 pause
--- PASS: TestErrorSpam/pause (1.18s)

TestErrorSpam/unpause (1.24s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748437 --log_dir /tmp/nospam-748437 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748437 --log_dir /tmp/nospam-748437 unpause
E0709 16:50:42.151510   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748437 --log_dir /tmp/nospam-748437 unpause
--- PASS: TestErrorSpam/unpause (1.24s)

TestErrorSpam/stop (16.07s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748437 --log_dir /tmp/nospam-748437 stop
E0709 16:50:52.392300   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-748437 --log_dir /tmp/nospam-748437 stop: (12.441650936s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748437 --log_dir /tmp/nospam-748437 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-748437 --log_dir /tmp/nospam-748437 stop: (1.637986574s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748437 --log_dir /tmp/nospam-748437 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-748437 --log_dir /tmp/nospam-748437 stop: (1.991469279s)
--- PASS: TestErrorSpam/stop (16.07s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19199-7540/.minikube/files/etc/test/nested/copy/14701/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (66.82s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-192943 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
E0709 16:51:12.872930   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
E0709 16:51:53.834082   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-192943 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m6.820356015s)
--- PASS: TestFunctional/serial/StartWithProxy (66.82s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (44.02s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-192943 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-192943 --alsologtostderr -v=8: (44.02160038s)
functional_test.go:659: soft start took 44.022210512s for "functional-192943" cluster.
--- PASS: TestFunctional/serial/SoftStart (44.02s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-192943 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.38s)

TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-192943 /tmp/TestFunctionalserialCacheCmdcacheadd_local3628650957/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 cache add minikube-local-cache-test:functional-192943
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 cache delete minikube-local-cache-test:functional-192943
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-192943
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-192943 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (207.045695ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.18s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 kubectl -- --context functional-192943 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-192943 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (42.68s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-192943 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0709 16:53:15.754236   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-192943 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.681698219s)
functional_test.go:757: restart took 42.681807993s for "functional-192943" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.68s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-192943 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-192943 logs: (1.08172168s)
--- PASS: TestFunctional/serial/LogsCmd (1.08s)

TestFunctional/serial/LogsFileCmd (1.1s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 logs --file /tmp/TestFunctionalserialLogsFileCmd4200488667/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-192943 logs --file /tmp/TestFunctionalserialLogsFileCmd4200488667/001/logs.txt: (1.101877602s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.10s)

TestFunctional/serial/InvalidService (4.11s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-192943 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-192943
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-192943: exit status 115 (273.010617ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.218:31495 |
	|-----------|-------------|-------------|-----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-192943 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.11s)

TestFunctional/parallel/ConfigCmd (0.33s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-192943 config get cpus: exit status 14 (65.495613ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-192943 config get cpus: exit status 14 (54.408886ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)

TestFunctional/parallel/DashboardCmd (32.65s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-192943 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-192943 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 24317: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (32.65s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-192943 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-192943 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (140.756153ms)

-- stdout --
	* [functional-192943] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19199
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19199-7540/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19199-7540/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0709 16:53:57.769654   23752 out.go:291] Setting OutFile to fd 1 ...
	I0709 16:53:57.769801   23752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 16:53:57.769811   23752 out.go:304] Setting ErrFile to fd 2...
	I0709 16:53:57.769817   23752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 16:53:57.770021   23752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19199-7540/.minikube/bin
	I0709 16:53:57.770580   23752 out.go:298] Setting JSON to false
	I0709 16:53:57.771582   23752 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2179,"bootTime":1720541859,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0709 16:53:57.771640   23752 start.go:139] virtualization: kvm guest
	I0709 16:53:57.773745   23752 out.go:177] * [functional-192943] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0709 16:53:57.775258   23752 out.go:177]   - MINIKUBE_LOCATION=19199
	I0709 16:53:57.775329   23752 notify.go:220] Checking for updates...
	I0709 16:53:57.777910   23752 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 16:53:57.779138   23752 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19199-7540/kubeconfig
	I0709 16:53:57.780378   23752 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19199-7540/.minikube
	I0709 16:53:57.781573   23752 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0709 16:53:57.782628   23752 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0709 16:53:57.784157   23752 config.go:182] Loaded profile config "functional-192943": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 16:53:57.784593   23752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:53:57.784655   23752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:53:57.798961   23752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35779
	I0709 16:53:57.799325   23752 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:53:57.800696   23752 main.go:141] libmachine: Using API Version  1
	I0709 16:53:57.800731   23752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:53:57.801177   23752 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:53:57.801376   23752 main.go:141] libmachine: (functional-192943) Calling .DriverName
	I0709 16:53:57.801645   23752 driver.go:392] Setting default libvirt URI to qemu:///system
	I0709 16:53:57.801965   23752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:53:57.801996   23752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:53:57.821539   23752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40921
	I0709 16:53:57.821924   23752 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:53:57.822281   23752 main.go:141] libmachine: Using API Version  1
	I0709 16:53:57.822298   23752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:53:57.822608   23752 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:53:57.822762   23752 main.go:141] libmachine: (functional-192943) Calling .DriverName
	I0709 16:53:57.858021   23752 out.go:177] * Using the kvm2 driver based on existing profile
	I0709 16:53:57.859393   23752 start.go:297] selected driver: kvm2
	I0709 16:53:57.859408   23752 start.go:901] validating driver "kvm2" against &{Name:functional-192943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:functional-192943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 16:53:57.859553   23752 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0709 16:53:57.861850   23752 out.go:177] 
	W0709 16:53:57.863152   23752 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0709 16:53:57.864280   23752 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-192943 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-192943 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-192943 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (153.871183ms)

-- stdout --
	* [functional-192943] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19199
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19199-7540/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19199-7540/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0709 16:53:57.619928   23709 out.go:291] Setting OutFile to fd 1 ...
	I0709 16:53:57.620041   23709 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 16:53:57.620049   23709 out.go:304] Setting ErrFile to fd 2...
	I0709 16:53:57.620053   23709 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 16:53:57.620415   23709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19199-7540/.minikube/bin
	I0709 16:53:57.620964   23709 out.go:298] Setting JSON to false
	I0709 16:53:57.621923   23709 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2179,"bootTime":1720541859,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0709 16:53:57.621980   23709 start.go:139] virtualization: kvm guest
	I0709 16:53:57.624249   23709 out.go:177] * [functional-192943] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0709 16:53:57.625693   23709 notify.go:220] Checking for updates...
	I0709 16:53:57.625712   23709 out.go:177]   - MINIKUBE_LOCATION=19199
	I0709 16:53:57.627149   23709 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 16:53:57.628442   23709 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19199-7540/kubeconfig
	I0709 16:53:57.629722   23709 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19199-7540/.minikube
	I0709 16:53:57.630905   23709 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0709 16:53:57.632165   23709 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0709 16:53:57.633826   23709 config.go:182] Loaded profile config "functional-192943": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 16:53:57.634213   23709 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:53:57.634284   23709 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:53:57.649207   23709 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34267
	I0709 16:53:57.649667   23709 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:53:57.650384   23709 main.go:141] libmachine: Using API Version  1
	I0709 16:53:57.650416   23709 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:53:57.650811   23709 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:53:57.651021   23709 main.go:141] libmachine: (functional-192943) Calling .DriverName
	I0709 16:53:57.651332   23709 driver.go:392] Setting default libvirt URI to qemu:///system
	I0709 16:53:57.651766   23709 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:53:57.651817   23709 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:53:57.671385   23709 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42837
	I0709 16:53:57.671826   23709 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:53:57.672307   23709 main.go:141] libmachine: Using API Version  1
	I0709 16:53:57.672330   23709 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:53:57.672677   23709 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:53:57.672883   23709 main.go:141] libmachine: (functional-192943) Calling .DriverName
	I0709 16:53:57.714834   23709 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0709 16:53:57.716250   23709 start.go:297] selected driver: kvm2
	I0709 16:53:57.716265   23709 start.go:901] validating driver "kvm2" against &{Name:functional-192943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:functional-192943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 16:53:57.716380   23709 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0709 16:53:57.718918   23709 out.go:177] 
	W0709 16:53:57.720249   23709 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0709 16:53:57.721594   23709 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-192943 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-192943 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-8ts45" [24f397c4-60e9-405b-a37f-a224e89cf7c0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-8ts45" [24f397c4-60e9-405b-a37f-a224e89cf7c0] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004004608s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.218:32354
functional_test.go:1671: http://192.168.39.218:32354: success! body:

Hostname: hello-node-connect-57b4589c47-8ts45

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.218:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.218:32354
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.59s)
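The URL the test fetched, `http://192.168.39.218:32354`, is the node's IP plus the NodePort Kubernetes allocated when the Deployment was exposed with `--type=NodePort`. A small sketch pulling the pieces apart with shell parameter expansion (the URL is copied from the log; the range check reflects the default `service-node-port-range` of 30000-32767):

```shell
url="http://192.168.39.218:32354"   # as printed by "minikube service ... --url"
hostport="${url#http://}"           # strip the scheme
host="${hostport%%:*}"              # node IP
port="${hostport##*:}"              # allocated NodePort
echo "node=$host nodeport=$port"
# NodePorts are drawn from the apiserver's default service-node-port-range
[ "$port" -ge 30000 ] && [ "$port" -le 32767 ] && echo "in default NodePort range"
```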

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (51.27s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [43c16aeb-f6b7-498d-8055-254564f9f8a5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003971783s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-192943 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-192943 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-192943 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-192943 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-192943 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [84b46639-585f-4336-8f2d-98b723e52558] Pending
helpers_test.go:344: "sp-pod" [84b46639-585f-4336-8f2d-98b723e52558] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [84b46639-585f-4336-8f2d-98b723e52558] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004775193s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-192943 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-192943 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-192943 delete -f testdata/storage-provisioner/pod.yaml: (1.104597877s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-192943 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5e611ce5-9e2e-4bc9-9fdd-c1a7b48aa346] Pending
helpers_test.go:344: "sp-pod" [5e611ce5-9e2e-4bc9-9fdd-c1a7b48aa346] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5e611ce5-9e2e-4bc9-9fdd-c1a7b48aa346] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.004404938s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-192943 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (51.27s)
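The core assertion of the PVC test above is durability: a file written through the first `sp-pod` (`touch /tmp/mount/foo`) must still exist when a second pod mounts the same claim (`ls /tmp/mount`). The same invariant, sketched locally with a temp directory standing in for the provisioned volume (no Kubernetes involved; names are illustrative):

```shell
pv="$(mktemp -d)"        # stand-in for the PersistentVolume backing the claim
touch "$pv/foo"          # kubectl exec sp-pod -- touch /tmp/mount/foo
# ...first pod deleted, second pod scheduled against the same claim...
survived=no
if ls "$pv" | grep -qx foo; then   # kubectl exec sp-pod -- ls /tmp/mount
  survived=yes
  echo "foo survived the pod restart"
fi
rm -rf "$pv"
```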

                                                
                                    
TestFunctional/parallel/SSHCmd (0.45s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.53s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh -n functional-192943 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 cp functional-192943:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1514880900/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh -n functional-192943 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh -n functional-192943 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.53s)
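Each `minikube cp` above is paired with an `ssh "sudo cat ..."`: the bytes read back from the node must equal the local source. The same round-trip invariant sketched locally, with plain `cp` standing in for the copy into the VM (paths and payload are illustrative, not the test's actual `testdata/cp-test.txt`):

```shell
src="$(mktemp)"
dst="$(mktemp -d)/cp-test.txt"
printf 'cp-test payload\n' > "$src"   # stand-in for testdata/cp-test.txt
cp "$src" "$dst"                      # stand-in for: minikube cp <src> <node>:<dst>
roundtrip=fail
if cmp -s "$src" "$dst"; then         # stand-in for: ssh "sudo cat <dst>" == <src>
  roundtrip=ok
  echo "copy round-trip ok"
fi
```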

                                                
                                    
TestFunctional/parallel/MySQL (32.12s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-192943 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-96rqq" [dedaa40a-91ac-4905-bc4e-b562d07dc1e2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-96rqq" [dedaa40a-91ac-4905-bc4e-b562d07dc1e2] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.004433481s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-192943 exec mysql-64454c8b5c-96rqq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-192943 exec mysql-64454c8b5c-96rqq -- mysql -ppassword -e "show databases;": exit status 1 (256.6725ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-192943 exec mysql-64454c8b5c-96rqq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-192943 exec mysql-64454c8b5c-96rqq -- mysql -ppassword -e "show databases;": exit status 1 (221.421018ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-192943 exec mysql-64454c8b5c-96rqq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-192943 exec mysql-64454c8b5c-96rqq -- mysql -ppassword -e "show databases;": exit status 1 (160.846596ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
2024/07/09 16:54:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1803: (dbg) Run:  kubectl --context functional-192943 exec mysql-64454c8b5c-96rqq -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.12s)
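The three failed `show databases;` attempts above are expected noise: ERROR 1045 and ERROR 2002 are transient while the mysql container bootstraps, and the test simply retries the query until it succeeds. The same loop shape, sketched with a stub that fails twice before succeeding (`run_query` is a stand-in for the `kubectl exec ... mysql` call, not the test's actual helper):

```shell
attempts=0
run_query() {            # stand-in for: kubectl exec <pod> -- mysql -ppassword -e "show databases;"
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]  # simulate two startup failures, then success
}
retries=0
until run_query; do
  retries=$((retries + 1))
  [ "$retries" -ge 10 ] && { echo "mysql never became ready" >&2; exit 1; }
  sleep 1                # the real test backs off between attempts
done
echo "query succeeded after $retries retries"
```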

                                                
                                    
TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/14701/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh "sudo cat /etc/test/nested/copy/14701/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
TestFunctional/parallel/CertSync (1.6s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/14701.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh "sudo cat /etc/ssl/certs/14701.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/14701.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh "sudo cat /usr/share/ca-certificates/14701.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/147012.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh "sudo cat /etc/ssl/certs/147012.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/147012.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh "sudo cat /usr/share/ca-certificates/147012.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.60s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-192943 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-192943 ssh "sudo systemctl is-active crio": exit status 1 (216.520663ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)
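The non-zero exit above is the pass condition: `systemctl is-active` prints the unit state and exits 0 only for an active unit, while `inactive` comes with exit status 3, so a disabled crio is exactly what the test wants to see. Simulated with a stub in place of the real `systemctl` call:

```shell
is_active() { echo inactive; return 3; }   # stub for: systemctl is-active crio
state="$(is_active)"
status=$?                                  # exit status of the command substitution
echo "crio state=$state exit=$status"
```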

                                                
                                    
TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.88s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-192943 docker-env) && out/minikube-linux-amd64 status -p functional-192943"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-192943 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.88s)
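The `eval $(... docker-env)` pattern above works because `minikube docker-env` prints `export` lines; evaluating them repoints the local docker CLI at the daemon inside the VM. Sketched with a stub whose values are hypothetical (the real output also includes `DOCKER_CERT_PATH` and similar TLS settings):

```shell
docker_env() {   # stub for: minikube -p functional-192943 docker-env
  echo 'export DOCKER_TLS_VERIFY="1"'
  echo 'export DOCKER_HOST="tcp://192.168.39.218:2376"'   # hypothetical endpoint
}
eval "$(docker_env)"   # apply the exports to the current shell
echo "docker now targets $DOCKER_HOST"
```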

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-192943 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-192943 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-jtncf" [144958e8-5b32-49c8-a8fd-da804adba648] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-jtncf" [144958e8-5b32-49c8-a8fd-da804adba648] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.004519351s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.22s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.30s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "225.95273ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "52.533919ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "236.184224ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "44.633663ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.54s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-192943 /tmp/TestFunctionalparallelMountCmdany-port1840010081/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1720544026239944894" to /tmp/TestFunctionalparallelMountCmdany-port1840010081/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1720544026239944894" to /tmp/TestFunctionalparallelMountCmdany-port1840010081/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1720544026239944894" to /tmp/TestFunctionalparallelMountCmdany-port1840010081/001/test-1720544026239944894
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-192943 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (224.625266ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul  9 16:53 created-by-test
-rw-r--r-- 1 docker docker 24 Jul  9 16:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul  9 16:53 test-1720544026239944894
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh cat /mount-9p/test-1720544026239944894
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-192943 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [059b3a5b-a59c-4de9-8efc-122573f99c00] Pending
helpers_test.go:344: "busybox-mount" [059b3a5b-a59c-4de9-8efc-122573f99c00] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [059b3a5b-a59c-4de9-8efc-122573f99c00] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [059b3a5b-a59c-4de9-8efc-122573f99c00] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004255086s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-192943 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-192943 /tmp/TestFunctionalparallelMountCmdany-port1840010081/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.54s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.57s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-192943 /tmp/TestFunctionalparallelMountCmdspecific-port2929637485/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-192943 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (238.734479ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-192943 /tmp/TestFunctionalparallelMountCmdspecific-port2929637485/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-192943 ssh "sudo umount -f /mount-9p": exit status 1 (189.463017ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-192943 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-192943 /tmp/TestFunctionalparallelMountCmdspecific-port2929637485/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.57s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-192943 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268286361/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-192943 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268286361/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-192943 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268286361/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-192943 ssh "findmnt -T" /mount1: exit status 1 (317.860113ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-192943 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-192943 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268286361/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-192943 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268286361/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-192943 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268286361/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 service list -o json
functional_test.go:1490: Took "308.829117ms" to run "out/minikube-linux-amd64 -p functional-192943 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.218:30521
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.48s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.218:30521
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-192943 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-192943
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-192943
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-192943 image ls --format short --alsologtostderr:
I0709 16:54:22.159210   24939 out.go:291] Setting OutFile to fd 1 ...
I0709 16:54:22.159562   24939 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 16:54:22.159577   24939 out.go:304] Setting ErrFile to fd 2...
I0709 16:54:22.159586   24939 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 16:54:22.160089   24939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19199-7540/.minikube/bin
I0709 16:54:22.160862   24939 config.go:182] Loaded profile config "functional-192943": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 16:54:22.161041   24939 config.go:182] Loaded profile config "functional-192943": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 16:54:22.161486   24939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0709 16:54:22.161535   24939 main.go:141] libmachine: Launching plugin server for driver kvm2
I0709 16:54:22.177014   24939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39839
I0709 16:54:22.177484   24939 main.go:141] libmachine: () Calling .GetVersion
I0709 16:54:22.178102   24939 main.go:141] libmachine: Using API Version  1
I0709 16:54:22.178143   24939 main.go:141] libmachine: () Calling .SetConfigRaw
I0709 16:54:22.178500   24939 main.go:141] libmachine: () Calling .GetMachineName
I0709 16:54:22.178751   24939 main.go:141] libmachine: (functional-192943) Calling .GetState
I0709 16:54:22.180649   24939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0709 16:54:22.180684   24939 main.go:141] libmachine: Launching plugin server for driver kvm2
I0709 16:54:22.195523   24939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39005
I0709 16:54:22.195859   24939 main.go:141] libmachine: () Calling .GetVersion
I0709 16:54:22.196329   24939 main.go:141] libmachine: Using API Version  1
I0709 16:54:22.196348   24939 main.go:141] libmachine: () Calling .SetConfigRaw
I0709 16:54:22.196660   24939 main.go:141] libmachine: () Calling .GetMachineName
I0709 16:54:22.196900   24939 main.go:141] libmachine: (functional-192943) Calling .DriverName
I0709 16:54:22.197116   24939 ssh_runner.go:195] Run: systemctl --version
I0709 16:54:22.197137   24939 main.go:141] libmachine: (functional-192943) Calling .GetSSHHostname
I0709 16:54:22.200216   24939 main.go:141] libmachine: (functional-192943) DBG | domain functional-192943 has defined MAC address 52:54:00:7e:d3:ab in network mk-functional-192943
I0709 16:54:22.200615   24939 main.go:141] libmachine: (functional-192943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:ab", ip: ""} in network mk-functional-192943: {Iface:virbr1 ExpiryTime:2024-07-09 17:51:13 +0000 UTC Type:0 Mac:52:54:00:7e:d3:ab Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:functional-192943 Clientid:01:52:54:00:7e:d3:ab}
I0709 16:54:22.200640   24939 main.go:141] libmachine: (functional-192943) DBG | domain functional-192943 has defined IP address 192.168.39.218 and MAC address 52:54:00:7e:d3:ab in network mk-functional-192943
I0709 16:54:22.200797   24939 main.go:141] libmachine: (functional-192943) Calling .GetSSHPort
I0709 16:54:22.200996   24939 main.go:141] libmachine: (functional-192943) Calling .GetSSHKeyPath
I0709 16:54:22.201150   24939 main.go:141] libmachine: (functional-192943) Calling .GetSSHUsername
I0709 16:54:22.201298   24939 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/functional-192943/id_rsa Username:docker}
I0709 16:54:22.299357   24939 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0709 16:54:22.346168   24939 main.go:141] libmachine: Making call to close driver server
I0709 16:54:22.346184   24939 main.go:141] libmachine: (functional-192943) Calling .Close
I0709 16:54:22.346518   24939 main.go:141] libmachine: Successfully made call to close driver server
I0709 16:54:22.346546   24939 main.go:141] libmachine: Making call to close connection to plugin binary
I0709 16:54:22.346556   24939 main.go:141] libmachine: Making call to close driver server
I0709 16:54:22.346565   24939 main.go:141] libmachine: (functional-192943) Calling .Close
I0709 16:54:22.346799   24939 main.go:141] libmachine: Successfully made call to close driver server
I0709 16:54:22.346813   24939 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-192943 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.30.2           | e874818b3caac | 111MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-192943 | f14f79c2f03a9 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.30.2           | 56ce0fd9fb532 | 117MB  |
| registry.k8s.io/kube-scheduler              | v1.30.2           | 7820c83aa1394 | 62MB   |
| registry.k8s.io/kube-proxy                  | v1.30.2           | 53c535741fb44 | 84.7MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/localhost/my-image                | functional-192943 | 5d9df932808b6 | 1.24MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/google-containers/addon-resizer      | functional-192943 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/nginx                     | latest            | fffffc90d343c | 188MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-192943 image ls --format table --alsologtostderr:
I0709 16:54:26.630127   25112 out.go:291] Setting OutFile to fd 1 ...
I0709 16:54:26.631188   25112 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 16:54:26.631243   25112 out.go:304] Setting ErrFile to fd 2...
I0709 16:54:26.631263   25112 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 16:54:26.631829   25112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19199-7540/.minikube/bin
I0709 16:54:26.632660   25112 config.go:182] Loaded profile config "functional-192943": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 16:54:26.632806   25112 config.go:182] Loaded profile config "functional-192943": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 16:54:26.633315   25112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0709 16:54:26.633372   25112 main.go:141] libmachine: Launching plugin server for driver kvm2
I0709 16:54:26.648163   25112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43079
I0709 16:54:26.648703   25112 main.go:141] libmachine: () Calling .GetVersion
I0709 16:54:26.649257   25112 main.go:141] libmachine: Using API Version  1
I0709 16:54:26.649279   25112 main.go:141] libmachine: () Calling .SetConfigRaw
I0709 16:54:26.649632   25112 main.go:141] libmachine: () Calling .GetMachineName
I0709 16:54:26.649825   25112 main.go:141] libmachine: (functional-192943) Calling .GetState
I0709 16:54:26.651690   25112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0709 16:54:26.651723   25112 main.go:141] libmachine: Launching plugin server for driver kvm2
I0709 16:54:26.666164   25112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44243
I0709 16:54:26.666575   25112 main.go:141] libmachine: () Calling .GetVersion
I0709 16:54:26.667011   25112 main.go:141] libmachine: Using API Version  1
I0709 16:54:26.667031   25112 main.go:141] libmachine: () Calling .SetConfigRaw
I0709 16:54:26.667343   25112 main.go:141] libmachine: () Calling .GetMachineName
I0709 16:54:26.667549   25112 main.go:141] libmachine: (functional-192943) Calling .DriverName
I0709 16:54:26.667770   25112 ssh_runner.go:195] Run: systemctl --version
I0709 16:54:26.667790   25112 main.go:141] libmachine: (functional-192943) Calling .GetSSHHostname
I0709 16:54:26.670212   25112 main.go:141] libmachine: (functional-192943) DBG | domain functional-192943 has defined MAC address 52:54:00:7e:d3:ab in network mk-functional-192943
I0709 16:54:26.670595   25112 main.go:141] libmachine: (functional-192943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:ab", ip: ""} in network mk-functional-192943: {Iface:virbr1 ExpiryTime:2024-07-09 17:51:13 +0000 UTC Type:0 Mac:52:54:00:7e:d3:ab Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:functional-192943 Clientid:01:52:54:00:7e:d3:ab}
I0709 16:54:26.670625   25112 main.go:141] libmachine: (functional-192943) DBG | domain functional-192943 has defined IP address 192.168.39.218 and MAC address 52:54:00:7e:d3:ab in network mk-functional-192943
I0709 16:54:26.670763   25112 main.go:141] libmachine: (functional-192943) Calling .GetSSHPort
I0709 16:54:26.670914   25112 main.go:141] libmachine: (functional-192943) Calling .GetSSHKeyPath
I0709 16:54:26.671040   25112 main.go:141] libmachine: (functional-192943) Calling .GetSSHUsername
I0709 16:54:26.671136   25112 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/functional-192943/id_rsa Username:docker}
I0709 16:54:26.758679   25112 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0709 16:54:26.793918   25112 main.go:141] libmachine: Making call to close driver server
I0709 16:54:26.793937   25112 main.go:141] libmachine: (functional-192943) Calling .Close
I0709 16:54:26.794258   25112 main.go:141] libmachine: Successfully made call to close driver server
I0709 16:54:26.794284   25112 main.go:141] libmachine: Making call to close connection to plugin binary
I0709 16:54:26.794292   25112 main.go:141] libmachine: Making call to close driver server
I0709 16:54:26.794299   25112 main.go:141] libmachine: (functional-192943) Calling .Close
I0709 16:54:26.794308   25112 main.go:141] libmachine: (functional-192943) DBG | Closing plugin on server side
I0709 16:54:26.794511   25112 main.go:141] libmachine: Successfully made call to close driver server
I0709 16:54:26.794527   25112 main.go:141] libmachine: Making call to close connection to plugin binary
I0709 16:54:26.794546   25112 main.go:141] libmachine: (functional-192943) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-192943 image ls --format json --alsologtostderr:
[{"id":"5d9df932808b6407f057b5bdaa7dfa6d9237841758ea792f8649fc6aa75b045d","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-192943"],"size":"1240000"},{"id":"7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"62000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"111000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-192943"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"f14f79c2f03a9b1eabd258bb9ef6e058b69e4aae49f9a6493dbd528923370619","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-192943"],"size":"30"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"117000000"},{"id":"53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"84700000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-192943 image ls --format json --alsologtostderr:
I0709 16:54:26.419308   25089 out.go:291] Setting OutFile to fd 1 ...
I0709 16:54:26.419451   25089 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 16:54:26.419466   25089 out.go:304] Setting ErrFile to fd 2...
I0709 16:54:26.419472   25089 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 16:54:26.419664   25089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19199-7540/.minikube/bin
I0709 16:54:26.420219   25089 config.go:182] Loaded profile config "functional-192943": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 16:54:26.420313   25089 config.go:182] Loaded profile config "functional-192943": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 16:54:26.420656   25089 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0709 16:54:26.420708   25089 main.go:141] libmachine: Launching plugin server for driver kvm2
I0709 16:54:26.435589   25089 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41639
I0709 16:54:26.436058   25089 main.go:141] libmachine: () Calling .GetVersion
I0709 16:54:26.436714   25089 main.go:141] libmachine: Using API Version  1
I0709 16:54:26.436743   25089 main.go:141] libmachine: () Calling .SetConfigRaw
I0709 16:54:26.437106   25089 main.go:141] libmachine: () Calling .GetMachineName
I0709 16:54:26.437371   25089 main.go:141] libmachine: (functional-192943) Calling .GetState
I0709 16:54:26.439197   25089 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0709 16:54:26.439239   25089 main.go:141] libmachine: Launching plugin server for driver kvm2
I0709 16:54:26.454618   25089 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36085
I0709 16:54:26.455172   25089 main.go:141] libmachine: () Calling .GetVersion
I0709 16:54:26.455708   25089 main.go:141] libmachine: Using API Version  1
I0709 16:54:26.455731   25089 main.go:141] libmachine: () Calling .SetConfigRaw
I0709 16:54:26.456036   25089 main.go:141] libmachine: () Calling .GetMachineName
I0709 16:54:26.456265   25089 main.go:141] libmachine: (functional-192943) Calling .DriverName
I0709 16:54:26.456475   25089 ssh_runner.go:195] Run: systemctl --version
I0709 16:54:26.456496   25089 main.go:141] libmachine: (functional-192943) Calling .GetSSHHostname
I0709 16:54:26.459085   25089 main.go:141] libmachine: (functional-192943) DBG | domain functional-192943 has defined MAC address 52:54:00:7e:d3:ab in network mk-functional-192943
I0709 16:54:26.459481   25089 main.go:141] libmachine: (functional-192943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:ab", ip: ""} in network mk-functional-192943: {Iface:virbr1 ExpiryTime:2024-07-09 17:51:13 +0000 UTC Type:0 Mac:52:54:00:7e:d3:ab Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:functional-192943 Clientid:01:52:54:00:7e:d3:ab}
I0709 16:54:26.459506   25089 main.go:141] libmachine: (functional-192943) DBG | domain functional-192943 has defined IP address 192.168.39.218 and MAC address 52:54:00:7e:d3:ab in network mk-functional-192943
I0709 16:54:26.459820   25089 main.go:141] libmachine: (functional-192943) Calling .GetSSHPort
I0709 16:54:26.460000   25089 main.go:141] libmachine: (functional-192943) Calling .GetSSHKeyPath
I0709 16:54:26.460165   25089 main.go:141] libmachine: (functional-192943) Calling .GetSSHUsername
I0709 16:54:26.460289   25089 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/functional-192943/id_rsa Username:docker}
I0709 16:54:26.551137   25089 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0709 16:54:26.578860   25089 main.go:141] libmachine: Making call to close driver server
I0709 16:54:26.578874   25089 main.go:141] libmachine: (functional-192943) Calling .Close
I0709 16:54:26.579131   25089 main.go:141] libmachine: (functional-192943) DBG | Closing plugin on server side
I0709 16:54:26.579144   25089 main.go:141] libmachine: Successfully made call to close driver server
I0709 16:54:26.579156   25089 main.go:141] libmachine: Making call to close connection to plugin binary
I0709 16:54:26.579170   25089 main.go:141] libmachine: Making call to close driver server
I0709 16:54:26.579177   25089 main.go:141] libmachine: (functional-192943) Calling .Close
I0709 16:54:26.579418   25089 main.go:141] libmachine: Successfully made call to close driver server
I0709 16:54:26.579441   25089 main.go:141] libmachine: Making call to close connection to plugin binary
I0709 16:54:26.579444   25089 main.go:141] libmachine: (functional-192943) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-192943 image ls --format yaml --alsologtostderr:
- id: 56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "117000000"
- id: 53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "84700000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-192943
size: "32900000"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "111000000"
- id: 7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "62000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: f14f79c2f03a9b1eabd258bb9ef6e058b69e4aae49f9a6493dbd528923370619
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-192943
size: "30"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-192943 image ls --format yaml --alsologtostderr:
I0709 16:54:22.401835   24963 out.go:291] Setting OutFile to fd 1 ...
I0709 16:54:22.402422   24963 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 16:54:22.402484   24963 out.go:304] Setting ErrFile to fd 2...
I0709 16:54:22.402503   24963 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 16:54:22.402976   24963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19199-7540/.minikube/bin
I0709 16:54:22.403936   24963 config.go:182] Loaded profile config "functional-192943": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 16:54:22.404048   24963 config.go:182] Loaded profile config "functional-192943": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 16:54:22.404410   24963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0709 16:54:22.404455   24963 main.go:141] libmachine: Launching plugin server for driver kvm2
I0709 16:54:22.419849   24963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41497
I0709 16:54:22.420313   24963 main.go:141] libmachine: () Calling .GetVersion
I0709 16:54:22.420898   24963 main.go:141] libmachine: Using API Version  1
I0709 16:54:22.420926   24963 main.go:141] libmachine: () Calling .SetConfigRaw
I0709 16:54:22.421293   24963 main.go:141] libmachine: () Calling .GetMachineName
I0709 16:54:22.421476   24963 main.go:141] libmachine: (functional-192943) Calling .GetState
I0709 16:54:22.423265   24963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0709 16:54:22.423301   24963 main.go:141] libmachine: Launching plugin server for driver kvm2
I0709 16:54:22.437972   24963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36439
I0709 16:54:22.438435   24963 main.go:141] libmachine: () Calling .GetVersion
I0709 16:54:22.438887   24963 main.go:141] libmachine: Using API Version  1
I0709 16:54:22.438906   24963 main.go:141] libmachine: () Calling .SetConfigRaw
I0709 16:54:22.439212   24963 main.go:141] libmachine: () Calling .GetMachineName
I0709 16:54:22.439403   24963 main.go:141] libmachine: (functional-192943) Calling .DriverName
I0709 16:54:22.439609   24963 ssh_runner.go:195] Run: systemctl --version
I0709 16:54:22.439638   24963 main.go:141] libmachine: (functional-192943) Calling .GetSSHHostname
I0709 16:54:22.442301   24963 main.go:141] libmachine: (functional-192943) DBG | domain functional-192943 has defined MAC address 52:54:00:7e:d3:ab in network mk-functional-192943
I0709 16:54:22.442649   24963 main.go:141] libmachine: (functional-192943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:ab", ip: ""} in network mk-functional-192943: {Iface:virbr1 ExpiryTime:2024-07-09 17:51:13 +0000 UTC Type:0 Mac:52:54:00:7e:d3:ab Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:functional-192943 Clientid:01:52:54:00:7e:d3:ab}
I0709 16:54:22.442679   24963 main.go:141] libmachine: (functional-192943) DBG | domain functional-192943 has defined IP address 192.168.39.218 and MAC address 52:54:00:7e:d3:ab in network mk-functional-192943
I0709 16:54:22.442807   24963 main.go:141] libmachine: (functional-192943) Calling .GetSSHPort
I0709 16:54:22.443001   24963 main.go:141] libmachine: (functional-192943) Calling .GetSSHKeyPath
I0709 16:54:22.443125   24963 main.go:141] libmachine: (functional-192943) Calling .GetSSHUsername
I0709 16:54:22.443277   24963 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/functional-192943/id_rsa Username:docker}
I0709 16:54:22.531693   24963 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0709 16:54:22.566357   24963 main.go:141] libmachine: Making call to close driver server
I0709 16:54:22.566371   24963 main.go:141] libmachine: (functional-192943) Calling .Close
I0709 16:54:22.566644   24963 main.go:141] libmachine: (functional-192943) DBG | Closing plugin on server side
I0709 16:54:22.566692   24963 main.go:141] libmachine: Successfully made call to close driver server
I0709 16:54:22.566701   24963 main.go:141] libmachine: Making call to close connection to plugin binary
I0709 16:54:22.566711   24963 main.go:141] libmachine: Making call to close driver server
I0709 16:54:22.566725   24963 main.go:141] libmachine: (functional-192943) Calling .Close
I0709 16:54:22.566962   24963 main.go:141] libmachine: (functional-192943) DBG | Closing plugin on server side
I0709 16:54:22.567013   24963 main.go:141] libmachine: Successfully made call to close driver server
I0709 16:54:22.567036   24963 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-192943 ssh pgrep buildkitd: exit status 1 (191.876438ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 image build -t localhost/my-image:functional-192943 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-192943 image build -t localhost/my-image:functional-192943 testdata/build --alsologtostderr: (3.389424501s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-192943 image build -t localhost/my-image:functional-192943 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 6f0cbcaf87e6
---> Removed intermediate container 6f0cbcaf87e6
---> 827ae89c020c
Step 3/3 : ADD content.txt /
---> 5d9df932808b
Successfully built 5d9df932808b
Successfully tagged localhost/my-image:functional-192943
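For reference, the three `Step N/3` lines in the build output above imply a Dockerfile of roughly this shape (a reconstruction from the log; the actual `testdata/build` Dockerfile is not shown in the report and may differ):

```dockerfile
# Hypothetical reconstruction of testdata/build/Dockerfile, inferred
# from the "Step 1/3".."Step 3/3" lines in the build log above.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```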
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-192943 image build -t localhost/my-image:functional-192943 testdata/build --alsologtostderr:
I0709 16:54:22.807360   25016 out.go:291] Setting OutFile to fd 1 ...
I0709 16:54:22.807523   25016 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 16:54:22.807533   25016 out.go:304] Setting ErrFile to fd 2...
I0709 16:54:22.807540   25016 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 16:54:22.807705   25016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19199-7540/.minikube/bin
I0709 16:54:22.808284   25016 config.go:182] Loaded profile config "functional-192943": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 16:54:22.808810   25016 config.go:182] Loaded profile config "functional-192943": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 16:54:22.809166   25016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0709 16:54:22.809231   25016 main.go:141] libmachine: Launching plugin server for driver kvm2
I0709 16:54:22.823494   25016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38841
I0709 16:54:22.823951   25016 main.go:141] libmachine: () Calling .GetVersion
I0709 16:54:22.824521   25016 main.go:141] libmachine: Using API Version  1
I0709 16:54:22.824546   25016 main.go:141] libmachine: () Calling .SetConfigRaw
I0709 16:54:22.824906   25016 main.go:141] libmachine: () Calling .GetMachineName
I0709 16:54:22.825123   25016 main.go:141] libmachine: (functional-192943) Calling .GetState
I0709 16:54:22.827157   25016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0709 16:54:22.827193   25016 main.go:141] libmachine: Launching plugin server for driver kvm2
I0709 16:54:22.842398   25016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41391
I0709 16:54:22.842818   25016 main.go:141] libmachine: () Calling .GetVersion
I0709 16:54:22.843349   25016 main.go:141] libmachine: Using API Version  1
I0709 16:54:22.843375   25016 main.go:141] libmachine: () Calling .SetConfigRaw
I0709 16:54:22.843730   25016 main.go:141] libmachine: () Calling .GetMachineName
I0709 16:54:22.843930   25016 main.go:141] libmachine: (functional-192943) Calling .DriverName
I0709 16:54:22.844153   25016 ssh_runner.go:195] Run: systemctl --version
I0709 16:54:22.844186   25016 main.go:141] libmachine: (functional-192943) Calling .GetSSHHostname
I0709 16:54:22.847046   25016 main.go:141] libmachine: (functional-192943) DBG | domain functional-192943 has defined MAC address 52:54:00:7e:d3:ab in network mk-functional-192943
I0709 16:54:22.847448   25016 main.go:141] libmachine: (functional-192943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:ab", ip: ""} in network mk-functional-192943: {Iface:virbr1 ExpiryTime:2024-07-09 17:51:13 +0000 UTC Type:0 Mac:52:54:00:7e:d3:ab Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:functional-192943 Clientid:01:52:54:00:7e:d3:ab}
I0709 16:54:22.847485   25016 main.go:141] libmachine: (functional-192943) DBG | domain functional-192943 has defined IP address 192.168.39.218 and MAC address 52:54:00:7e:d3:ab in network mk-functional-192943
I0709 16:54:22.847604   25016 main.go:141] libmachine: (functional-192943) Calling .GetSSHPort
I0709 16:54:22.847782   25016 main.go:141] libmachine: (functional-192943) Calling .GetSSHKeyPath
I0709 16:54:22.848043   25016 main.go:141] libmachine: (functional-192943) Calling .GetSSHUsername
I0709 16:54:22.848242   25016 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/functional-192943/id_rsa Username:docker}
I0709 16:54:22.936244   25016 build_images.go:161] Building image from path: /tmp/build.1849126453.tar
I0709 16:54:22.936316   25016 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0709 16:54:22.967224   25016 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1849126453.tar
I0709 16:54:22.973458   25016 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1849126453.tar: stat -c "%s %y" /var/lib/minikube/build/build.1849126453.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1849126453.tar': No such file or directory
I0709 16:54:22.973493   25016 ssh_runner.go:362] scp /tmp/build.1849126453.tar --> /var/lib/minikube/build/build.1849126453.tar (3072 bytes)
I0709 16:54:23.012472   25016 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1849126453
I0709 16:54:23.027257   25016 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1849126453 -xf /var/lib/minikube/build/build.1849126453.tar
I0709 16:54:23.041649   25016 docker.go:360] Building image: /var/lib/minikube/build/build.1849126453
I0709 16:54:23.041716   25016 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-192943 /var/lib/minikube/build/build.1849126453
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0709 16:54:26.085775   25016 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-192943 /var/lib/minikube/build/build.1849126453: (3.044030149s)
I0709 16:54:26.085868   25016 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1849126453
I0709 16:54:26.102652   25016 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1849126453.tar
I0709 16:54:26.147883   25016 build_images.go:217] Built localhost/my-image:functional-192943 from /tmp/build.1849126453.tar
I0709 16:54:26.147921   25016 build_images.go:133] succeeded building to: functional-192943
I0709 16:54:26.147927   25016 build_images.go:134] failed building to: 
I0709 16:54:26.147952   25016 main.go:141] libmachine: Making call to close driver server
I0709 16:54:26.147964   25016 main.go:141] libmachine: (functional-192943) Calling .Close
I0709 16:54:26.148295   25016 main.go:141] libmachine: Successfully made call to close driver server
I0709 16:54:26.148316   25016 main.go:141] libmachine: Making call to close connection to plugin binary
I0709 16:54:26.148326   25016 main.go:141] libmachine: Making call to close driver server
I0709 16:54:26.148337   25016 main.go:141] libmachine: (functional-192943) Calling .Close
I0709 16:54:26.148614   25016 main.go:141] libmachine: Successfully made call to close driver server
I0709 16:54:26.148631   25016 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.81s)

TestFunctional/parallel/ImageCommands/Setup (1.36s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.343566447s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-192943
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.36s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 image load --daemon gcr.io/google-containers/addon-resizer:functional-192943 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-192943 image load --daemon gcr.io/google-containers/addon-resizer:functional-192943 --alsologtostderr: (4.585379428s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.80s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 image load --daemon gcr.io/google-containers/addon-resizer:functional-192943 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-192943 image load --daemon gcr.io/google-containers/addon-resizer:functional-192943 --alsologtostderr: (2.538721227s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.74s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.32876152s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-192943
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 image load --daemon gcr.io/google-containers/addon-resizer:functional-192943 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-192943 image load --daemon gcr.io/google-containers/addon-resizer:functional-192943 --alsologtostderr: (4.322259266s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.88s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 image save gcr.io/google-containers/addon-resizer:functional-192943 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-192943 image save gcr.io/google-containers/addon-resizer:functional-192943 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.650947325s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.65s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 image rm gcr.io/google-containers/addon-resizer:functional-192943 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-192943 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.387656021s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.59s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-192943
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 image save --daemon gcr.io/google-containers/addon-resizer:functional-192943 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-192943 image save --daemon gcr.io/google-containers/addon-resizer:functional-192943 --alsologtostderr: (1.886159687s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-192943
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.93s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.59s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-192943 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.59s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-192943
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-192943
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-192943
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (233.1s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-789900 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-789900 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (2m3.36767894s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-789900 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-789900 cache add gcr.io/k8s-minikube/gvisor-addon:2: (24.230208245s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-789900 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-789900 addons enable gvisor: (4.142434661s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [de5fc157-5034-41ea-87c3-f781acda8fde] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.008107655s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-789900 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [8bea56a4-172f-40b4-9ba7-e248ce519d09] Pending
helpers_test.go:344: "nginx-gvisor" [8bea56a4-172f-40b4-9ba7-e248ce519d09] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [8bea56a4-172f-40b4-9ba7-e248ce519d09] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 15.005547461s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-789900
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-789900: (7.311898176s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-789900 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-789900 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (40.736545687s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [de5fc157-5034-41ea-87c3-f781acda8fde] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:344: "gvisor" [de5fc157-5034-41ea-87c3-f781acda8fde] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.005152766s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [8bea56a4-172f-40b4-9ba7-e248ce519d09] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.007020787s
helpers_test.go:175: Cleaning up "gvisor-789900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-789900
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-789900: (1.065509245s)
--- PASS: TestGvisorAddon (233.10s)
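The `replace --force -f testdata/nginx-gvisor.yaml` step above applies a pod that runs under the gVisor runtime. A minimal sketch of a manifest of that shape follows; only the `run=nginx,runtime=gvisor` labels (which the test waits on) are taken from the log, while the pod name, image, and use of `runtimeClassName: gvisor` are assumptions and the actual testdata file may differ:

```shell
# Hypothetical reconstruction of a gVisor-targeted pod manifest.
# The labels come from the log's selector; everything else is an assumption.
cat <<'EOF' > nginx-gvisor.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-gvisor
  labels:
    run: nginx
    runtime: gvisor
spec:
  runtimeClassName: gvisor   # RuntimeClass assumed to be provided by the gvisor addon
  containers:
  - name: nginx
    image: nginx
EOF
```

Such a manifest would then be applied with `kubectl replace --force -f nginx-gvisor.yaml`, matching the command shown in the log.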

TestMultiControlPlane/serial/StartCluster (216.11s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-528154 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0709 16:55:31.910746   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
E0709 16:55:59.595399   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-528154 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m35.426296413s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (216.11s)

TestMultiControlPlane/serial/DeployApp (6.31s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-528154 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-528154 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-528154 -- rollout status deployment/busybox: (3.997920172s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-528154 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-528154 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-528154 -- exec busybox-fc5497c4f-477zd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-528154 -- exec busybox-fc5497c4f-5qtlx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-528154 -- exec busybox-fc5497c4f-9mmgr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-528154 -- exec busybox-fc5497c4f-477zd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-528154 -- exec busybox-fc5497c4f-5qtlx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-528154 -- exec busybox-fc5497c4f-9mmgr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-528154 -- exec busybox-fc5497c4f-477zd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-528154 -- exec busybox-fc5497c4f-5qtlx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-528154 -- exec busybox-fc5497c4f-9mmgr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.31s)

TestMultiControlPlane/serial/PingHostFromPods (1.24s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-528154 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-528154 -- exec busybox-fc5497c4f-477zd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-528154 -- exec busybox-fc5497c4f-477zd -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-528154 -- exec busybox-fc5497c4f-5qtlx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-528154 -- exec busybox-fc5497c4f-5qtlx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-528154 -- exec busybox-fc5497c4f-9mmgr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-528154 -- exec busybox-fc5497c4f-9mmgr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.24s)
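The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above extracts the host's IP from inside each busybox pod: `awk 'NR==5'` keeps only the fifth line of the lookup output and `cut -d' ' -f3` takes its third space-separated field. The sketch below reproduces the extraction on canned input; the sample text mimics older BusyBox `nslookup` output and is an assumption (in the real test the pipeline runs via `kubectl exec`):

```shell
# Assumed BusyBox-style nslookup output; line 5 carries the answer record.
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1'

# NR==5 selects the final "Address 1:" line; field 3 is the bare IP.
host_ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
```

The extracted address is what the test then pings (`ping -c 1 192.168.39.1`) to confirm pod-to-host connectivity.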

TestMultiControlPlane/serial/AddWorkerNode (50.14s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-528154 -v=7 --alsologtostderr
E0709 16:58:44.779646   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
E0709 16:58:44.785010   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
E0709 16:58:44.795335   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
E0709 16:58:44.816461   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
E0709 16:58:44.856646   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
E0709 16:58:44.936945   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
E0709 16:58:45.098127   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
E0709 16:58:45.419047   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
E0709 16:58:46.060162   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
E0709 16:58:47.340303   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
E0709 16:58:49.900545   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
E0709 16:58:55.020870   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
E0709 16:59:05.261792   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-528154 -v=7 --alsologtostderr: (49.221643218s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (50.14s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-528154 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

TestMultiControlPlane/serial/CopyFile (13.02s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 cp testdata/cp-test.txt ha-528154:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 cp ha-528154:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1463534532/001/cp-test_ha-528154.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 cp ha-528154:/home/docker/cp-test.txt ha-528154-m02:/home/docker/cp-test_ha-528154_ha-528154-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m02 "sudo cat /home/docker/cp-test_ha-528154_ha-528154-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 cp ha-528154:/home/docker/cp-test.txt ha-528154-m03:/home/docker/cp-test_ha-528154_ha-528154-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m03 "sudo cat /home/docker/cp-test_ha-528154_ha-528154-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 cp ha-528154:/home/docker/cp-test.txt ha-528154-m04:/home/docker/cp-test_ha-528154_ha-528154-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m04 "sudo cat /home/docker/cp-test_ha-528154_ha-528154-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 cp testdata/cp-test.txt ha-528154-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 cp ha-528154-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1463534532/001/cp-test_ha-528154-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 cp ha-528154-m02:/home/docker/cp-test.txt ha-528154:/home/docker/cp-test_ha-528154-m02_ha-528154.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154 "sudo cat /home/docker/cp-test_ha-528154-m02_ha-528154.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 cp ha-528154-m02:/home/docker/cp-test.txt ha-528154-m03:/home/docker/cp-test_ha-528154-m02_ha-528154-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m03 "sudo cat /home/docker/cp-test_ha-528154-m02_ha-528154-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 cp ha-528154-m02:/home/docker/cp-test.txt ha-528154-m04:/home/docker/cp-test_ha-528154-m02_ha-528154-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m04 "sudo cat /home/docker/cp-test_ha-528154-m02_ha-528154-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 cp testdata/cp-test.txt ha-528154-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 cp ha-528154-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1463534532/001/cp-test_ha-528154-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 cp ha-528154-m03:/home/docker/cp-test.txt ha-528154:/home/docker/cp-test_ha-528154-m03_ha-528154.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154 "sudo cat /home/docker/cp-test_ha-528154-m03_ha-528154.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 cp ha-528154-m03:/home/docker/cp-test.txt ha-528154-m02:/home/docker/cp-test_ha-528154-m03_ha-528154-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m02 "sudo cat /home/docker/cp-test_ha-528154-m03_ha-528154-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 cp ha-528154-m03:/home/docker/cp-test.txt ha-528154-m04:/home/docker/cp-test_ha-528154-m03_ha-528154-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m04 "sudo cat /home/docker/cp-test_ha-528154-m03_ha-528154-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 cp testdata/cp-test.txt ha-528154-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 cp ha-528154-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1463534532/001/cp-test_ha-528154-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 cp ha-528154-m04:/home/docker/cp-test.txt ha-528154:/home/docker/cp-test_ha-528154-m04_ha-528154.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154 "sudo cat /home/docker/cp-test_ha-528154-m04_ha-528154.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 cp ha-528154-m04:/home/docker/cp-test.txt ha-528154-m02:/home/docker/cp-test_ha-528154-m04_ha-528154-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m02 "sudo cat /home/docker/cp-test_ha-528154-m04_ha-528154-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 cp ha-528154-m04:/home/docker/cp-test.txt ha-528154-m03:/home/docker/cp-test_ha-528154-m04_ha-528154-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 ssh -n ha-528154-m03 "sudo cat /home/docker/cp-test_ha-528154-m04_ha-528154-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.02s)
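Each CopyFile step above follows the same round trip: copy a file to a node with `minikube -p ha-528154 cp <src> <node>:<dst>`, then read it back with `minikube -p ha-528154 ssh -n <node> "sudo cat <dst>"` and compare contents. The sketch below shows that pattern without a cluster; plain `cp` and `cat` stand in for the minikube commands, so this is only an illustration of the check, not the test's actual transport:

```shell
# Local stand-in for the CopyFile round trip (no cluster required).
set -eu
workdir=$(mktemp -d)
printf 'cp-test contents\n' > "$workdir/cp-test.txt"

cp "$workdir/cp-test.txt" "$workdir/cp-test_copy.txt"   # stand-in for `minikube cp`
readback=$(cat "$workdir/cp-test_copy.txt")             # stand-in for `minikube ssh "sudo cat ..."`
rm -r "$workdir"
```

In the real test the comparison is implicit: the harness asserts that the `sudo cat` output on every node matches the original `testdata/cp-test.txt`.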

TestMultiControlPlane/serial/StopSecondaryNode (13.3s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 node stop m02 -v=7 --alsologtostderr
E0709 16:59:25.742341   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-528154 node stop m02 -v=7 --alsologtostderr: (12.633064524s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-528154 status -v=7 --alsologtostderr: exit status 7 (661.342918ms)

-- stdout --
	ha-528154
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-528154-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-528154-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-528154-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0709 16:59:37.477902   29482 out.go:291] Setting OutFile to fd 1 ...
	I0709 16:59:37.478144   29482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 16:59:37.478154   29482 out.go:304] Setting ErrFile to fd 2...
	I0709 16:59:37.478160   29482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 16:59:37.478350   29482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19199-7540/.minikube/bin
	I0709 16:59:37.478530   29482 out.go:298] Setting JSON to false
	I0709 16:59:37.478562   29482 mustload.go:65] Loading cluster: ha-528154
	I0709 16:59:37.478613   29482 notify.go:220] Checking for updates...
	I0709 16:59:37.479025   29482 config.go:182] Loaded profile config "ha-528154": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 16:59:37.479042   29482 status.go:255] checking status of ha-528154 ...
	I0709 16:59:37.479450   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:59:37.479530   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:59:37.498501   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33171
	I0709 16:59:37.498923   29482 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:59:37.499719   29482 main.go:141] libmachine: Using API Version  1
	I0709 16:59:37.499752   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:59:37.500111   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:59:37.500335   29482 main.go:141] libmachine: (ha-528154) Calling .GetState
	I0709 16:59:37.502012   29482 status.go:330] ha-528154 host status = "Running" (err=<nil>)
	I0709 16:59:37.502026   29482 host.go:66] Checking if "ha-528154" exists ...
	I0709 16:59:37.502327   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:59:37.502365   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:59:37.517026   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46053
	I0709 16:59:37.517516   29482 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:59:37.518032   29482 main.go:141] libmachine: Using API Version  1
	I0709 16:59:37.518060   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:59:37.518380   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:59:37.518563   29482 main.go:141] libmachine: (ha-528154) Calling .GetIP
	I0709 16:59:37.521410   29482 main.go:141] libmachine: (ha-528154) DBG | domain ha-528154 has defined MAC address 52:54:00:a4:d4:5c in network mk-ha-528154
	I0709 16:59:37.521853   29482 main.go:141] libmachine: (ha-528154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:d4:5c", ip: ""} in network mk-ha-528154: {Iface:virbr1 ExpiryTime:2024-07-09 17:54:51 +0000 UTC Type:0 Mac:52:54:00:a4:d4:5c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-528154 Clientid:01:52:54:00:a4:d4:5c}
	I0709 16:59:37.521887   29482 main.go:141] libmachine: (ha-528154) DBG | domain ha-528154 has defined IP address 192.168.39.92 and MAC address 52:54:00:a4:d4:5c in network mk-ha-528154
	I0709 16:59:37.522000   29482 host.go:66] Checking if "ha-528154" exists ...
	I0709 16:59:37.522298   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:59:37.522336   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:59:37.537216   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44563
	I0709 16:59:37.537696   29482 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:59:37.538141   29482 main.go:141] libmachine: Using API Version  1
	I0709 16:59:37.538164   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:59:37.538501   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:59:37.538731   29482 main.go:141] libmachine: (ha-528154) Calling .DriverName
	I0709 16:59:37.538902   29482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0709 16:59:37.538928   29482 main.go:141] libmachine: (ha-528154) Calling .GetSSHHostname
	I0709 16:59:37.542051   29482 main.go:141] libmachine: (ha-528154) DBG | domain ha-528154 has defined MAC address 52:54:00:a4:d4:5c in network mk-ha-528154
	I0709 16:59:37.542518   29482 main.go:141] libmachine: (ha-528154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:d4:5c", ip: ""} in network mk-ha-528154: {Iface:virbr1 ExpiryTime:2024-07-09 17:54:51 +0000 UTC Type:0 Mac:52:54:00:a4:d4:5c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-528154 Clientid:01:52:54:00:a4:d4:5c}
	I0709 16:59:37.542542   29482 main.go:141] libmachine: (ha-528154) DBG | domain ha-528154 has defined IP address 192.168.39.92 and MAC address 52:54:00:a4:d4:5c in network mk-ha-528154
	I0709 16:59:37.542764   29482 main.go:141] libmachine: (ha-528154) Calling .GetSSHPort
	I0709 16:59:37.542962   29482 main.go:141] libmachine: (ha-528154) Calling .GetSSHKeyPath
	I0709 16:59:37.543105   29482 main.go:141] libmachine: (ha-528154) Calling .GetSSHUsername
	I0709 16:59:37.543256   29482 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/ha-528154/id_rsa Username:docker}
	I0709 16:59:37.629112   29482 ssh_runner.go:195] Run: systemctl --version
	I0709 16:59:37.636975   29482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 16:59:37.658987   29482 kubeconfig.go:125] found "ha-528154" server: "https://192.168.39.254:8443"
	I0709 16:59:37.659016   29482 api_server.go:166] Checking apiserver status ...
	I0709 16:59:37.659048   29482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 16:59:37.685342   29482 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1998/cgroup
	W0709 16:59:37.696300   29482 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1998/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0709 16:59:37.696362   29482 ssh_runner.go:195] Run: ls
	I0709 16:59:37.701371   29482 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0709 16:59:37.705490   29482 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0709 16:59:37.705515   29482 status.go:422] ha-528154 apiserver status = Running (err=<nil>)
	I0709 16:59:37.705524   29482 status.go:257] ha-528154 status: &{Name:ha-528154 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0709 16:59:37.705539   29482 status.go:255] checking status of ha-528154-m02 ...
	I0709 16:59:37.705872   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:59:37.705904   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:59:37.720671   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39519
	I0709 16:59:37.721104   29482 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:59:37.721532   29482 main.go:141] libmachine: Using API Version  1
	I0709 16:59:37.721559   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:59:37.721941   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:59:37.722120   29482 main.go:141] libmachine: (ha-528154-m02) Calling .GetState
	I0709 16:59:37.723790   29482 status.go:330] ha-528154-m02 host status = "Stopped" (err=<nil>)
	I0709 16:59:37.723803   29482 status.go:343] host is not running, skipping remaining checks
	I0709 16:59:37.723809   29482 status.go:257] ha-528154-m02 status: &{Name:ha-528154-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0709 16:59:37.723828   29482 status.go:255] checking status of ha-528154-m03 ...
	I0709 16:59:37.724189   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:59:37.724230   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:59:37.739441   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46051
	I0709 16:59:37.739874   29482 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:59:37.740495   29482 main.go:141] libmachine: Using API Version  1
	I0709 16:59:37.740537   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:59:37.740870   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:59:37.741072   29482 main.go:141] libmachine: (ha-528154-m03) Calling .GetState
	I0709 16:59:37.742790   29482 status.go:330] ha-528154-m03 host status = "Running" (err=<nil>)
	I0709 16:59:37.742808   29482 host.go:66] Checking if "ha-528154-m03" exists ...
	I0709 16:59:37.743153   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:59:37.743192   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:59:37.758677   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36329
	I0709 16:59:37.759081   29482 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:59:37.759555   29482 main.go:141] libmachine: Using API Version  1
	I0709 16:59:37.759574   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:59:37.759874   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:59:37.760075   29482 main.go:141] libmachine: (ha-528154-m03) Calling .GetIP
	I0709 16:59:37.763374   29482 main.go:141] libmachine: (ha-528154-m03) DBG | domain ha-528154-m03 has defined MAC address 52:54:00:52:49:49 in network mk-ha-528154
	I0709 16:59:37.763782   29482 main.go:141] libmachine: (ha-528154-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:49:49", ip: ""} in network mk-ha-528154: {Iface:virbr1 ExpiryTime:2024-07-09 17:57:13 +0000 UTC Type:0 Mac:52:54:00:52:49:49 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-528154-m03 Clientid:01:52:54:00:52:49:49}
	I0709 16:59:37.763813   29482 main.go:141] libmachine: (ha-528154-m03) DBG | domain ha-528154-m03 has defined IP address 192.168.39.54 and MAC address 52:54:00:52:49:49 in network mk-ha-528154
	I0709 16:59:37.763946   29482 host.go:66] Checking if "ha-528154-m03" exists ...
	I0709 16:59:37.764273   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:59:37.764308   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:59:37.779115   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38919
	I0709 16:59:37.779499   29482 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:59:37.779960   29482 main.go:141] libmachine: Using API Version  1
	I0709 16:59:37.779985   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:59:37.780339   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:59:37.780542   29482 main.go:141] libmachine: (ha-528154-m03) Calling .DriverName
	I0709 16:59:37.780728   29482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0709 16:59:37.780746   29482 main.go:141] libmachine: (ha-528154-m03) Calling .GetSSHHostname
	I0709 16:59:37.783499   29482 main.go:141] libmachine: (ha-528154-m03) DBG | domain ha-528154-m03 has defined MAC address 52:54:00:52:49:49 in network mk-ha-528154
	I0709 16:59:37.783932   29482 main.go:141] libmachine: (ha-528154-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:49:49", ip: ""} in network mk-ha-528154: {Iface:virbr1 ExpiryTime:2024-07-09 17:57:13 +0000 UTC Type:0 Mac:52:54:00:52:49:49 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-528154-m03 Clientid:01:52:54:00:52:49:49}
	I0709 16:59:37.783956   29482 main.go:141] libmachine: (ha-528154-m03) DBG | domain ha-528154-m03 has defined IP address 192.168.39.54 and MAC address 52:54:00:52:49:49 in network mk-ha-528154
	I0709 16:59:37.784114   29482 main.go:141] libmachine: (ha-528154-m03) Calling .GetSSHPort
	I0709 16:59:37.784334   29482 main.go:141] libmachine: (ha-528154-m03) Calling .GetSSHKeyPath
	I0709 16:59:37.784490   29482 main.go:141] libmachine: (ha-528154-m03) Calling .GetSSHUsername
	I0709 16:59:37.784650   29482 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/ha-528154-m03/id_rsa Username:docker}
	I0709 16:59:37.873945   29482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 16:59:37.892600   29482 kubeconfig.go:125] found "ha-528154" server: "https://192.168.39.254:8443"
	I0709 16:59:37.892626   29482 api_server.go:166] Checking apiserver status ...
	I0709 16:59:37.892658   29482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 16:59:37.912464   29482 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1930/cgroup
	W0709 16:59:37.926479   29482 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1930/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0709 16:59:37.926535   29482 ssh_runner.go:195] Run: ls
	I0709 16:59:37.931545   29482 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0709 16:59:37.935889   29482 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0709 16:59:37.935915   29482 status.go:422] ha-528154-m03 apiserver status = Running (err=<nil>)
	I0709 16:59:37.935928   29482 status.go:257] ha-528154-m03 status: &{Name:ha-528154-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0709 16:59:37.935961   29482 status.go:255] checking status of ha-528154-m04 ...
	I0709 16:59:37.936282   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:59:37.936326   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:59:37.951244   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36905
	I0709 16:59:37.951708   29482 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:59:37.952296   29482 main.go:141] libmachine: Using API Version  1
	I0709 16:59:37.952316   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:59:37.952638   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:59:37.952828   29482 main.go:141] libmachine: (ha-528154-m04) Calling .GetState
	I0709 16:59:37.954427   29482 status.go:330] ha-528154-m04 host status = "Running" (err=<nil>)
	I0709 16:59:37.954446   29482 host.go:66] Checking if "ha-528154-m04" exists ...
	I0709 16:59:37.954809   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:59:37.954845   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:59:37.970576   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35157
	I0709 16:59:37.970971   29482 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:59:37.971515   29482 main.go:141] libmachine: Using API Version  1
	I0709 16:59:37.971536   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:59:37.971838   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:59:37.972032   29482 main.go:141] libmachine: (ha-528154-m04) Calling .GetIP
	I0709 16:59:37.975286   29482 main.go:141] libmachine: (ha-528154-m04) DBG | domain ha-528154-m04 has defined MAC address 52:54:00:10:9f:30 in network mk-ha-528154
	I0709 16:59:37.975710   29482 main.go:141] libmachine: (ha-528154-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9f:30", ip: ""} in network mk-ha-528154: {Iface:virbr1 ExpiryTime:2024-07-09 17:58:35 +0000 UTC Type:0 Mac:52:54:00:10:9f:30 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-528154-m04 Clientid:01:52:54:00:10:9f:30}
	I0709 16:59:37.975760   29482 main.go:141] libmachine: (ha-528154-m04) DBG | domain ha-528154-m04 has defined IP address 192.168.39.145 and MAC address 52:54:00:10:9f:30 in network mk-ha-528154
	I0709 16:59:37.975944   29482 host.go:66] Checking if "ha-528154-m04" exists ...
	I0709 16:59:37.976267   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 16:59:37.976313   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 16:59:37.991371   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36661
	I0709 16:59:37.991788   29482 main.go:141] libmachine: () Calling .GetVersion
	I0709 16:59:37.992332   29482 main.go:141] libmachine: Using API Version  1
	I0709 16:59:37.992358   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 16:59:37.992656   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0709 16:59:37.992863   29482 main.go:141] libmachine: (ha-528154-m04) Calling .DriverName
	I0709 16:59:37.993079   29482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0709 16:59:37.993096   29482 main.go:141] libmachine: (ha-528154-m04) Calling .GetSSHHostname
	I0709 16:59:37.996001   29482 main.go:141] libmachine: (ha-528154-m04) DBG | domain ha-528154-m04 has defined MAC address 52:54:00:10:9f:30 in network mk-ha-528154
	I0709 16:59:37.996481   29482 main.go:141] libmachine: (ha-528154-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:9f:30", ip: ""} in network mk-ha-528154: {Iface:virbr1 ExpiryTime:2024-07-09 17:58:35 +0000 UTC Type:0 Mac:52:54:00:10:9f:30 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-528154-m04 Clientid:01:52:54:00:10:9f:30}
	I0709 16:59:37.996511   29482 main.go:141] libmachine: (ha-528154-m04) DBG | domain ha-528154-m04 has defined IP address 192.168.39.145 and MAC address 52:54:00:10:9f:30 in network mk-ha-528154
	I0709 16:59:37.996642   29482 main.go:141] libmachine: (ha-528154-m04) Calling .GetSSHPort
	I0709 16:59:37.996848   29482 main.go:141] libmachine: (ha-528154-m04) Calling .GetSSHKeyPath
	I0709 16:59:37.996963   29482 main.go:141] libmachine: (ha-528154-m04) Calling .GetSSHUsername
	I0709 16:59:37.997110   29482 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/ha-528154-m04/id_rsa Username:docker}
	I0709 16:59:38.077430   29482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 16:59:38.096814   29482 status.go:257] ha-528154-m04 status: &{Name:ha-528154-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.30s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.4s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.40s)

TestMultiControlPlane/serial/RestartSecondaryNode (28.88s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-528154 node start m02 -v=7 --alsologtostderr: (27.950726693s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 status -v=7 --alsologtostderr
E0709 17:00:06.702519   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (28.88s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.56s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.56s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (202.24s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-528154 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-528154 -v=7 --alsologtostderr
E0709 17:00:31.911323   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-528154 -v=7 --alsologtostderr: (41.740159859s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-528154 --wait=true -v=7 --alsologtostderr
E0709 17:01:28.623656   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-528154 --wait=true -v=7 --alsologtostderr: (2m40.410673973s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-528154
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (202.24s)

TestMultiControlPlane/serial/DeleteSecondaryNode (8.24s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-528154 node delete m03 -v=7 --alsologtostderr: (7.489136837s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.24s)
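The `kubectl get nodes -o go-template` check above walks every node's conditions and emits the status of the `Ready` condition. As a rough illustration only, the same extraction in Python over a hand-written sample of `kubectl get nodes -o json` output (sample data is hypothetical, not captured from this cluster):

```python
import json

# Hypothetical sample of `kubectl get nodes -o json` output, trimmed to the
# fields the go-template in the test reads; NOT taken from this test run.
sample = json.loads("""
{
  "items": [
    {"metadata": {"name": "ha-528154"},
     "status": {"conditions": [
       {"type": "MemoryPressure", "status": "False"},
       {"type": "Ready", "status": "True"}]}},
    {"metadata": {"name": "ha-528154-m04"},
     "status": {"conditions": [
       {"type": "Ready", "status": "True"}]}}
  ]
}
""")

# Same walk as the go-template: for each item, for each condition,
# keep the status where the condition type is "Ready".
ready = [cond["status"]
         for node in sample["items"]
         for cond in node["status"]["conditions"]
         if cond["type"] == "Ready"]
print(ready)
```

With the sample above this yields one entry per node, which is exactly what the test asserts on after a node delete.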

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

TestMultiControlPlane/serial/StopCluster (38.35s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 stop -v=7 --alsologtostderr
E0709 17:03:44.779218   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
E0709 17:04:12.464017   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-528154 stop -v=7 --alsologtostderr: (38.251978178s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-528154 status -v=7 --alsologtostderr: exit status 7 (99.375037ms)

-- stdout --
	ha-528154
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-528154-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-528154-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0709 17:04:17.084925   31639 out.go:291] Setting OutFile to fd 1 ...
	I0709 17:04:17.085035   31639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 17:04:17.085044   31639 out.go:304] Setting ErrFile to fd 2...
	I0709 17:04:17.085048   31639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 17:04:17.085230   31639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19199-7540/.minikube/bin
	I0709 17:04:17.085389   31639 out.go:298] Setting JSON to false
	I0709 17:04:17.085413   31639 mustload.go:65] Loading cluster: ha-528154
	I0709 17:04:17.085507   31639 notify.go:220] Checking for updates...
	I0709 17:04:17.085791   31639 config.go:182] Loaded profile config "ha-528154": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 17:04:17.085806   31639 status.go:255] checking status of ha-528154 ...
	I0709 17:04:17.086144   31639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 17:04:17.086210   31639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 17:04:17.104818   31639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0709 17:04:17.105308   31639 main.go:141] libmachine: () Calling .GetVersion
	I0709 17:04:17.105870   31639 main.go:141] libmachine: Using API Version  1
	I0709 17:04:17.105897   31639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 17:04:17.106349   31639 main.go:141] libmachine: () Calling .GetMachineName
	I0709 17:04:17.106594   31639 main.go:141] libmachine: (ha-528154) Calling .GetState
	I0709 17:04:17.107959   31639 status.go:330] ha-528154 host status = "Stopped" (err=<nil>)
	I0709 17:04:17.107972   31639 status.go:343] host is not running, skipping remaining checks
	I0709 17:04:17.107978   31639 status.go:257] ha-528154 status: &{Name:ha-528154 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0709 17:04:17.108027   31639 status.go:255] checking status of ha-528154-m02 ...
	I0709 17:04:17.108342   31639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 17:04:17.108379   31639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 17:04:17.122244   31639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46327
	I0709 17:04:17.122601   31639 main.go:141] libmachine: () Calling .GetVersion
	I0709 17:04:17.122985   31639 main.go:141] libmachine: Using API Version  1
	I0709 17:04:17.123004   31639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 17:04:17.123287   31639 main.go:141] libmachine: () Calling .GetMachineName
	I0709 17:04:17.123427   31639 main.go:141] libmachine: (ha-528154-m02) Calling .GetState
	I0709 17:04:17.124871   31639 status.go:330] ha-528154-m02 host status = "Stopped" (err=<nil>)
	I0709 17:04:17.124884   31639 status.go:343] host is not running, skipping remaining checks
	I0709 17:04:17.124890   31639 status.go:257] ha-528154-m02 status: &{Name:ha-528154-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0709 17:04:17.124914   31639 status.go:255] checking status of ha-528154-m04 ...
	I0709 17:04:17.125218   31639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 17:04:17.125255   31639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 17:04:17.139230   31639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44799
	I0709 17:04:17.139859   31639 main.go:141] libmachine: () Calling .GetVersion
	I0709 17:04:17.140425   31639 main.go:141] libmachine: Using API Version  1
	I0709 17:04:17.140445   31639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 17:04:17.140743   31639 main.go:141] libmachine: () Calling .GetMachineName
	I0709 17:04:17.140909   31639 main.go:141] libmachine: (ha-528154-m04) Calling .GetState
	I0709 17:04:17.142506   31639 status.go:330] ha-528154-m04 host status = "Stopped" (err=<nil>)
	I0709 17:04:17.142529   31639 status.go:343] host is not running, skipping remaining checks
	I0709 17:04:17.142535   31639 status.go:257] ha-528154-m04 status: &{Name:ha-528154-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (38.35s)
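The stopped-cluster `status` stdout above is a simple block-per-node text format. A minimal Python sketch (a hypothetical helper, not part of minikube) that folds such output into per-node host states:

```python
# Hand-copied from the stdout block above; this is static sample text,
# not a live `minikube status` call.
SAMPLE = """\
ha-528154
type: Control Plane
host: Stopped
kubelet: Stopped

ha-528154-m02
type: Control Plane
host: Stopped
kubelet: Stopped

ha-528154-m04
type: Worker
host: Stopped
kubelet: Stopped
"""

# Blocks are separated by blank lines; the first line of each block is the
# node name, the rest are "key: value" fields.
states = {}
for block in SAMPLE.strip().split("\n\n"):
    lines = block.splitlines()
    name = lines[0]
    fields = dict(line.split(": ", 1) for line in lines[1:])
    states[name] = fields["host"]

print(states)
```

A fully stopped map like this is why the test expects `minikube status` to exit non-zero (exit status 7 above) rather than 0.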

TestMultiControlPlane/serial/RestartCluster (170.91s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-528154 --wait=true -v=7 --alsologtostderr --driver=kvm2 
E0709 17:05:31.910791   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
E0709 17:06:54.955932   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-528154 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m50.180266768s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (170.91s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

TestMultiControlPlane/serial/AddSecondaryNode (79.95s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-528154 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-528154 --control-plane -v=7 --alsologtostderr: (1m19.127188877s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-528154 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.95s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

TestImageBuild/serial/Setup (49.78s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-398373 --driver=kvm2 
E0709 17:08:44.781041   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-398373 --driver=kvm2 : (49.775236313s)
--- PASS: TestImageBuild/serial/Setup (49.78s)

TestImageBuild/serial/NormalBuild (1.55s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-398373
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-398373: (1.547988998s)
--- PASS: TestImageBuild/serial/NormalBuild (1.55s)

TestImageBuild/serial/BuildWithBuildArg (0.94s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-398373
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.94s)

TestImageBuild/serial/BuildWithDockerIgnore (0.37s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-398373
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.37s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.3s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-398373
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.30s)

TestJSONOutput/start/Command (64.41s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-811955 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-811955 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m4.405009629s)
--- PASS: TestJSONOutput/start/Command (64.41s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.59s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-811955 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.59s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.57s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-811955 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (13.34s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-811955 --output=json --user=testUser
E0709 17:10:31.911224   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-811955 --output=json --user=testUser: (13.33523887s)
--- PASS: TestJSONOutput/stop/Command (13.34s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-986029 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-986029 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.891391ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a885011a-5762-4537-80c1-581902948d3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-986029] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a60d6705-e829-42d8-a7ac-b5b93bbb9509","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19199"}}
	{"specversion":"1.0","id":"f180662b-4d94-4ca9-b8e4-9687a8d5bd0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"16179c0c-d4ce-4251-99c3-38f3ac8ebcbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19199-7540/kubeconfig"}}
	{"specversion":"1.0","id":"0216257c-015d-4cbb-af94-48606293a0b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19199-7540/.minikube"}}
	{"specversion":"1.0","id":"2a9dff1a-fbd5-4956-a4ac-7b5c7b0fe4fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f346fa0c-b090-438e-8f99-068731415395","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"21764577-e589-4bfb-bd9c-fcac717bf1ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-986029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-986029
--- PASS: TestErrorJSONOutput (0.18s)
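Note on the output format verified above: each line minikube emits with --output=json is a CloudEvents envelope, with the event kind under "type" and the payload under "data". A minimal sketch of filtering such lines for errors (the two sample lines are abbreviated from the -- stdout -- block above; the shortened "id" values are stand-ins):

```python
import json

# Each --output=json line is a CloudEvents envelope; the event kind is in
# "type" and the payload in "data". Samples adapted from the output above.
lines = [
    '{"specversion":"1.0","id":"a60d6705","source":"https://minikube.sigs.k8s.io/",'
    '"type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json",'
    '"data":{"message":"MINIKUBE_LOCATION=19199"}}',
    '{"specversion":"1.0","id":"21764577","source":"https://minikube.sigs.k8s.io/",'
    '"type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json",'
    '"data":{"exitcode":"56","message":"The driver \'fail\' is not supported on linux/amd64",'
    '"name":"DRV_UNSUPPORTED_OS"}}',
]

events = [json.loads(line) for line in lines]
# Keep only error events, mirroring what the Audit/JSONOutput tests assert on.
errors = [e["data"] for e in events if e["type"].endswith(".error")]
for err in errors:
    print(err["name"], err["exitcode"], err["message"])
```

This is why the test can assert on exit status 56 and the DRV_UNSUPPORTED_OS name: both travel in the structured "data" payload rather than in free-form text.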

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (100.52s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-016610 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-016610 --driver=kvm2 : (47.27034902s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-019334 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-019334 --driver=kvm2 : (50.501544565s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-016610
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-019334
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-019334" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-019334
helpers_test.go:175: Cleaning up "first-016610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-016610
--- PASS: TestMinikubeProfile (100.52s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (30.32s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-872545 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-872545 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (29.321324274s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.32s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-872545 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-872545 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.49s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-884462 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-884462 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (26.484097726s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.49s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-884462 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-884462 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-872545 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-884462 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-884462 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (2.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-884462
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-884462: (2.272335787s)
--- PASS: TestMountStart/serial/Stop (2.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (26.23s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-884462
E0709 17:13:44.781142   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-884462: (25.233025957s)
--- PASS: TestMountStart/serial/RestartStopped (26.23s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-884462 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-884462 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (118.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-839362 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0709 17:15:07.825244   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
E0709 17:15:31.910372   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-839362 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (1m57.707347263s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (118.10s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-839362 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-839362 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-839362 -- rollout status deployment/busybox: (2.446932637s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-839362 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-839362 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-839362 -- exec busybox-fc5497c4f-8vw6v -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-839362 -- exec busybox-fc5497c4f-vmjhq -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-839362 -- exec busybox-fc5497c4f-8vw6v -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-839362 -- exec busybox-fc5497c4f-vmjhq -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-839362 -- exec busybox-fc5497c4f-8vw6v -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-839362 -- exec busybox-fc5497c4f-vmjhq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.01s)
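The jsonpath queries above ({.items[*].status.podIP} and {.items[*].metadata.name}) flatten the pod list into space-separated values. A sketch of the equivalent extraction in Python, using the busybox pod names from the log with hypothetical pod IPs:

```python
# Equivalent of kubectl's jsonpath '{.items[*].metadata.name}' and
# '{.items[*].status.podIP}' over a pod list. Pod names are from the
# log above; the IPs are hypothetical stand-ins.
pod_list = {
    "items": [
        {"metadata": {"name": "busybox-fc5497c4f-8vw6v"},
         "status": {"podIP": "10.244.0.3"}},
        {"metadata": {"name": "busybox-fc5497c4f-vmjhq"},
         "status": {"podIP": "10.244.1.2"}},
    ]
}

names = [item["metadata"]["name"] for item in pod_list["items"]]
ips = [item["status"]["podIP"] for item in pod_list["items"]]
print(" ".join(names))  # jsonpath joins [*] results with spaces
print(" ".join(ips))
```

The test uses the names list to pick exec targets for the subsequent nslookup checks, and the IP list to confirm each replica was scheduled and got an address.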

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-839362 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-839362 -- exec busybox-fc5497c4f-8vw6v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-839362 -- exec busybox-fc5497c4f-8vw6v -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-839362 -- exec busybox-fc5497c4f-vmjhq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-839362 -- exec busybox-fc5497c4f-vmjhq -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)
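The in-pod pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) picks the third space-separated field of line 5 of the resolver output. A sketch of that extraction, using a hypothetical sample in BusyBox nslookup's line layout:

```python
# Sketch of: nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
# The resolver output below is a hypothetical sample in BusyBox's format;
# the pipeline relies on line 5 carrying the resolved host address.
sample = """Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal"""

line5 = sample.splitlines()[4]      # awk 'NR==5' (1-indexed line 5)
host_ip = line5.split(" ")[2]       # cut -d' ' -f3: fields split on single spaces
print(host_ip)
```

The resulting address is then handed to ping -c 1, which is how the test confirms each pod can reach the KVM host at 192.168.39.1.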

                                                
                                    
TestMultiNode/serial/AddNode (48.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-839362 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-839362 -v 3 --alsologtostderr: (47.937500912s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.48s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-839362 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 cp testdata/cp-test.txt multinode-839362:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 ssh -n multinode-839362 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 cp multinode-839362:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile598550687/001/cp-test_multinode-839362.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 ssh -n multinode-839362 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 cp multinode-839362:/home/docker/cp-test.txt multinode-839362-m02:/home/docker/cp-test_multinode-839362_multinode-839362-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 ssh -n multinode-839362 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 ssh -n multinode-839362-m02 "sudo cat /home/docker/cp-test_multinode-839362_multinode-839362-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 cp multinode-839362:/home/docker/cp-test.txt multinode-839362-m03:/home/docker/cp-test_multinode-839362_multinode-839362-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 ssh -n multinode-839362 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 ssh -n multinode-839362-m03 "sudo cat /home/docker/cp-test_multinode-839362_multinode-839362-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 cp testdata/cp-test.txt multinode-839362-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 ssh -n multinode-839362-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 cp multinode-839362-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile598550687/001/cp-test_multinode-839362-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 ssh -n multinode-839362-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 cp multinode-839362-m02:/home/docker/cp-test.txt multinode-839362:/home/docker/cp-test_multinode-839362-m02_multinode-839362.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 ssh -n multinode-839362-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 ssh -n multinode-839362 "sudo cat /home/docker/cp-test_multinode-839362-m02_multinode-839362.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 cp multinode-839362-m02:/home/docker/cp-test.txt multinode-839362-m03:/home/docker/cp-test_multinode-839362-m02_multinode-839362-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 ssh -n multinode-839362-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 ssh -n multinode-839362-m03 "sudo cat /home/docker/cp-test_multinode-839362-m02_multinode-839362-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 cp testdata/cp-test.txt multinode-839362-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 ssh -n multinode-839362-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 cp multinode-839362-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile598550687/001/cp-test_multinode-839362-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 ssh -n multinode-839362-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 cp multinode-839362-m03:/home/docker/cp-test.txt multinode-839362:/home/docker/cp-test_multinode-839362-m03_multinode-839362.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 ssh -n multinode-839362-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 ssh -n multinode-839362 "sudo cat /home/docker/cp-test_multinode-839362-m03_multinode-839362.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 cp multinode-839362-m03:/home/docker/cp-test.txt multinode-839362-m02:/home/docker/cp-test_multinode-839362-m03_multinode-839362-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 ssh -n multinode-839362-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 ssh -n multinode-839362-m02 "sudo cat /home/docker/cp-test_multinode-839362-m03_multinode-839362-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.95s)

                                                
                                    
TestMultiNode/serial/StopNode (3.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-839362 node stop m03: (2.573836535s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-839362 status: exit status 7 (411.669486ms)

                                                
                                                
-- stdout --
	multinode-839362
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-839362-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-839362-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-839362 status --alsologtostderr: exit status 7 (415.36223ms)

                                                
                                                
-- stdout --
	multinode-839362
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-839362-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-839362-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0709 17:16:57.351592   39960 out.go:291] Setting OutFile to fd 1 ...
	I0709 17:16:57.351873   39960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 17:16:57.351909   39960 out.go:304] Setting ErrFile to fd 2...
	I0709 17:16:57.351921   39960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 17:16:57.352288   39960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19199-7540/.minikube/bin
	I0709 17:16:57.352501   39960 out.go:298] Setting JSON to false
	I0709 17:16:57.352530   39960 mustload.go:65] Loading cluster: multinode-839362
	I0709 17:16:57.352620   39960 notify.go:220] Checking for updates...
	I0709 17:16:57.352912   39960 config.go:182] Loaded profile config "multinode-839362": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 17:16:57.352929   39960 status.go:255] checking status of multinode-839362 ...
	I0709 17:16:57.353292   39960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 17:16:57.353362   39960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 17:16:57.369126   39960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39355
	I0709 17:16:57.369520   39960 main.go:141] libmachine: () Calling .GetVersion
	I0709 17:16:57.370179   39960 main.go:141] libmachine: Using API Version  1
	I0709 17:16:57.370208   39960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 17:16:57.370602   39960 main.go:141] libmachine: () Calling .GetMachineName
	I0709 17:16:57.370847   39960 main.go:141] libmachine: (multinode-839362) Calling .GetState
	I0709 17:16:57.372517   39960 status.go:330] multinode-839362 host status = "Running" (err=<nil>)
	I0709 17:16:57.372541   39960 host.go:66] Checking if "multinode-839362" exists ...
	I0709 17:16:57.372841   39960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 17:16:57.372887   39960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 17:16:57.387538   39960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45397
	I0709 17:16:57.387889   39960 main.go:141] libmachine: () Calling .GetVersion
	I0709 17:16:57.388414   39960 main.go:141] libmachine: Using API Version  1
	I0709 17:16:57.388444   39960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 17:16:57.388729   39960 main.go:141] libmachine: () Calling .GetMachineName
	I0709 17:16:57.388912   39960 main.go:141] libmachine: (multinode-839362) Calling .GetIP
	I0709 17:16:57.391637   39960 main.go:141] libmachine: (multinode-839362) DBG | domain multinode-839362 has defined MAC address 52:54:00:44:39:cb in network mk-multinode-839362
	I0709 17:16:57.392142   39960 main.go:141] libmachine: (multinode-839362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:39:cb", ip: ""} in network mk-multinode-839362: {Iface:virbr1 ExpiryTime:2024-07-09 18:14:09 +0000 UTC Type:0 Mac:52:54:00:44:39:cb Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-839362 Clientid:01:52:54:00:44:39:cb}
	I0709 17:16:57.392178   39960 main.go:141] libmachine: (multinode-839362) DBG | domain multinode-839362 has defined IP address 192.168.39.54 and MAC address 52:54:00:44:39:cb in network mk-multinode-839362
	I0709 17:16:57.392330   39960 host.go:66] Checking if "multinode-839362" exists ...
	I0709 17:16:57.392747   39960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 17:16:57.392850   39960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 17:16:57.407864   39960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34259
	I0709 17:16:57.408295   39960 main.go:141] libmachine: () Calling .GetVersion
	I0709 17:16:57.408887   39960 main.go:141] libmachine: Using API Version  1
	I0709 17:16:57.408904   39960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 17:16:57.409252   39960 main.go:141] libmachine: () Calling .GetMachineName
	I0709 17:16:57.409448   39960 main.go:141] libmachine: (multinode-839362) Calling .DriverName
	I0709 17:16:57.409668   39960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0709 17:16:57.409703   39960 main.go:141] libmachine: (multinode-839362) Calling .GetSSHHostname
	I0709 17:16:57.412512   39960 main.go:141] libmachine: (multinode-839362) DBG | domain multinode-839362 has defined MAC address 52:54:00:44:39:cb in network mk-multinode-839362
	I0709 17:16:57.413015   39960 main.go:141] libmachine: (multinode-839362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:39:cb", ip: ""} in network mk-multinode-839362: {Iface:virbr1 ExpiryTime:2024-07-09 18:14:09 +0000 UTC Type:0 Mac:52:54:00:44:39:cb Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-839362 Clientid:01:52:54:00:44:39:cb}
	I0709 17:16:57.413040   39960 main.go:141] libmachine: (multinode-839362) DBG | domain multinode-839362 has defined IP address 192.168.39.54 and MAC address 52:54:00:44:39:cb in network mk-multinode-839362
	I0709 17:16:57.413188   39960 main.go:141] libmachine: (multinode-839362) Calling .GetSSHPort
	I0709 17:16:57.413350   39960 main.go:141] libmachine: (multinode-839362) Calling .GetSSHKeyPath
	I0709 17:16:57.413546   39960 main.go:141] libmachine: (multinode-839362) Calling .GetSSHUsername
	I0709 17:16:57.413679   39960 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/multinode-839362/id_rsa Username:docker}
	I0709 17:16:57.496006   39960 ssh_runner.go:195] Run: systemctl --version
	I0709 17:16:57.508768   39960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 17:16:57.523725   39960 kubeconfig.go:125] found "multinode-839362" server: "https://192.168.39.54:8443"
	I0709 17:16:57.523754   39960 api_server.go:166] Checking apiserver status ...
	I0709 17:16:57.523789   39960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 17:16:57.539529   39960 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1855/cgroup
	W0709 17:16:57.549459   39960 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1855/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0709 17:16:57.549512   39960 ssh_runner.go:195] Run: ls
	I0709 17:16:57.553799   39960 api_server.go:253] Checking apiserver healthz at https://192.168.39.54:8443/healthz ...
	I0709 17:16:57.557685   39960 api_server.go:279] https://192.168.39.54:8443/healthz returned 200:
	ok
	I0709 17:16:57.557702   39960 status.go:422] multinode-839362 apiserver status = Running (err=<nil>)
	I0709 17:16:57.557712   39960 status.go:257] multinode-839362 status: &{Name:multinode-839362 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0709 17:16:57.557737   39960 status.go:255] checking status of multinode-839362-m02 ...
	I0709 17:16:57.558039   39960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 17:16:57.558074   39960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 17:16:57.573139   39960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33631
	I0709 17:16:57.573531   39960 main.go:141] libmachine: () Calling .GetVersion
	I0709 17:16:57.573982   39960 main.go:141] libmachine: Using API Version  1
	I0709 17:16:57.574001   39960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 17:16:57.574279   39960 main.go:141] libmachine: () Calling .GetMachineName
	I0709 17:16:57.574504   39960 main.go:141] libmachine: (multinode-839362-m02) Calling .GetState
	I0709 17:16:57.575869   39960 status.go:330] multinode-839362-m02 host status = "Running" (err=<nil>)
	I0709 17:16:57.575891   39960 host.go:66] Checking if "multinode-839362-m02" exists ...
	I0709 17:16:57.576180   39960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 17:16:57.576209   39960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 17:16:57.590862   39960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45655
	I0709 17:16:57.591280   39960 main.go:141] libmachine: () Calling .GetVersion
	I0709 17:16:57.591784   39960 main.go:141] libmachine: Using API Version  1
	I0709 17:16:57.591803   39960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 17:16:57.592117   39960 main.go:141] libmachine: () Calling .GetMachineName
	I0709 17:16:57.592295   39960 main.go:141] libmachine: (multinode-839362-m02) Calling .GetIP
	I0709 17:16:57.594963   39960 main.go:141] libmachine: (multinode-839362-m02) DBG | domain multinode-839362-m02 has defined MAC address 52:54:00:ad:d3:65 in network mk-multinode-839362
	I0709 17:16:57.595404   39960 main.go:141] libmachine: (multinode-839362-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:d3:65", ip: ""} in network mk-multinode-839362: {Iface:virbr1 ExpiryTime:2024-07-09 18:15:23 +0000 UTC Type:0 Mac:52:54:00:ad:d3:65 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-839362-m02 Clientid:01:52:54:00:ad:d3:65}
	I0709 17:16:57.595425   39960 main.go:141] libmachine: (multinode-839362-m02) DBG | domain multinode-839362-m02 has defined IP address 192.168.39.211 and MAC address 52:54:00:ad:d3:65 in network mk-multinode-839362
	I0709 17:16:57.595584   39960 host.go:66] Checking if "multinode-839362-m02" exists ...
	I0709 17:16:57.595914   39960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 17:16:57.595963   39960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 17:16:57.610815   39960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34309
	I0709 17:16:57.611212   39960 main.go:141] libmachine: () Calling .GetVersion
	I0709 17:16:57.611615   39960 main.go:141] libmachine: Using API Version  1
	I0709 17:16:57.611636   39960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 17:16:57.611934   39960 main.go:141] libmachine: () Calling .GetMachineName
	I0709 17:16:57.612161   39960 main.go:141] libmachine: (multinode-839362-m02) Calling .DriverName
	I0709 17:16:57.612356   39960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0709 17:16:57.612382   39960 main.go:141] libmachine: (multinode-839362-m02) Calling .GetSSHHostname
	I0709 17:16:57.614974   39960 main.go:141] libmachine: (multinode-839362-m02) DBG | domain multinode-839362-m02 has defined MAC address 52:54:00:ad:d3:65 in network mk-multinode-839362
	I0709 17:16:57.615440   39960 main.go:141] libmachine: (multinode-839362-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:d3:65", ip: ""} in network mk-multinode-839362: {Iface:virbr1 ExpiryTime:2024-07-09 18:15:23 +0000 UTC Type:0 Mac:52:54:00:ad:d3:65 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-839362-m02 Clientid:01:52:54:00:ad:d3:65}
	I0709 17:16:57.615475   39960 main.go:141] libmachine: (multinode-839362-m02) DBG | domain multinode-839362-m02 has defined IP address 192.168.39.211 and MAC address 52:54:00:ad:d3:65 in network mk-multinode-839362
	I0709 17:16:57.615675   39960 main.go:141] libmachine: (multinode-839362-m02) Calling .GetSSHPort
	I0709 17:16:57.615831   39960 main.go:141] libmachine: (multinode-839362-m02) Calling .GetSSHKeyPath
	I0709 17:16:57.615976   39960 main.go:141] libmachine: (multinode-839362-m02) Calling .GetSSHUsername
	I0709 17:16:57.616094   39960 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19199-7540/.minikube/machines/multinode-839362-m02/id_rsa Username:docker}
	I0709 17:16:57.691204   39960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 17:16:57.704875   39960 status.go:257] multinode-839362-m02 status: &{Name:multinode-839362-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0709 17:16:57.704929   39960 status.go:255] checking status of multinode-839362-m03 ...
	I0709 17:16:57.705323   39960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 17:16:57.705365   39960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 17:16:57.721047   39960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35915
	I0709 17:16:57.721463   39960 main.go:141] libmachine: () Calling .GetVersion
	I0709 17:16:57.721829   39960 main.go:141] libmachine: Using API Version  1
	I0709 17:16:57.721848   39960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 17:16:57.722064   39960 main.go:141] libmachine: () Calling .GetMachineName
	I0709 17:16:57.722224   39960 main.go:141] libmachine: (multinode-839362-m03) Calling .GetState
	I0709 17:16:57.723685   39960 status.go:330] multinode-839362-m03 host status = "Stopped" (err=<nil>)
	I0709 17:16:57.723699   39960 status.go:343] host is not running, skipping remaining checks
	I0709 17:16:57.723705   39960 status.go:257] multinode-839362-m03 status: &{Name:multinode-839362-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.40s)

TestMultiNode/serial/StartAfterStop (31.93s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-839362 node start m03 -v=7 --alsologtostderr: (31.328258788s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.93s)

TestMultiNode/serial/RestartKeepsNodes (235.4s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-839362
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-839362
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-839362: (28.108780694s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-839362 --wait=true -v=8 --alsologtostderr
E0709 17:18:44.780227   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
E0709 17:20:31.910919   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-839362 --wait=true -v=8 --alsologtostderr: (3m27.210190045s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-839362
--- PASS: TestMultiNode/serial/RestartKeepsNodes (235.40s)

TestMultiNode/serial/DeleteNode (2.41s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-839362 node delete m03: (1.889146144s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.41s)

TestMultiNode/serial/StopMultiNode (25.17s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-839362 stop: (25.004247613s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-839362 status: exit status 7 (79.179891ms)

-- stdout --
	multinode-839362
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-839362-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-839362 status --alsologtostderr: exit status 7 (82.97737ms)

-- stdout --
	multinode-839362
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-839362-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0709 17:21:52.600779   42306 out.go:291] Setting OutFile to fd 1 ...
	I0709 17:21:52.601005   42306 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 17:21:52.601015   42306 out.go:304] Setting ErrFile to fd 2...
	I0709 17:21:52.601021   42306 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 17:21:52.601221   42306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19199-7540/.minikube/bin
	I0709 17:21:52.601389   42306 out.go:298] Setting JSON to false
	I0709 17:21:52.601418   42306 mustload.go:65] Loading cluster: multinode-839362
	I0709 17:21:52.601548   42306 notify.go:220] Checking for updates...
	I0709 17:21:52.601843   42306 config.go:182] Loaded profile config "multinode-839362": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 17:21:52.601859   42306 status.go:255] checking status of multinode-839362 ...
	I0709 17:21:52.602236   42306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 17:21:52.602302   42306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 17:21:52.621801   42306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32951
	I0709 17:21:52.622246   42306 main.go:141] libmachine: () Calling .GetVersion
	I0709 17:21:52.622809   42306 main.go:141] libmachine: Using API Version  1
	I0709 17:21:52.622828   42306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 17:21:52.623172   42306 main.go:141] libmachine: () Calling .GetMachineName
	I0709 17:21:52.623366   42306 main.go:141] libmachine: (multinode-839362) Calling .GetState
	I0709 17:21:52.625100   42306 status.go:330] multinode-839362 host status = "Stopped" (err=<nil>)
	I0709 17:21:52.625113   42306 status.go:343] host is not running, skipping remaining checks
	I0709 17:21:52.625119   42306 status.go:257] multinode-839362 status: &{Name:multinode-839362 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0709 17:21:52.625136   42306 status.go:255] checking status of multinode-839362-m02 ...
	I0709 17:21:52.625466   42306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0709 17:21:52.625504   42306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0709 17:21:52.639952   42306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40581
	I0709 17:21:52.640376   42306 main.go:141] libmachine: () Calling .GetVersion
	I0709 17:21:52.640772   42306 main.go:141] libmachine: Using API Version  1
	I0709 17:21:52.640790   42306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0709 17:21:52.641127   42306 main.go:141] libmachine: () Calling .GetMachineName
	I0709 17:21:52.641301   42306 main.go:141] libmachine: (multinode-839362-m02) Calling .GetState
	I0709 17:21:52.642795   42306 status.go:330] multinode-839362-m02 host status = "Stopped" (err=<nil>)
	I0709 17:21:52.642808   42306 status.go:343] host is not running, skipping remaining checks
	I0709 17:21:52.642815   42306 status.go:257] multinode-839362-m02 status: &{Name:multinode-839362-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.17s)

TestMultiNode/serial/RestartMultiNode (88.29s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-839362 --wait=true -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-839362 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m27.737612935s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-839362 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (88.29s)

TestMultiNode/serial/ValidateNameConflict (52.2s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-839362
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-839362-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-839362-m02 --driver=kvm2 : exit status 14 (61.464469ms)

-- stdout --
	* [multinode-839362-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19199
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19199-7540/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19199-7540/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-839362-m02' is duplicated with machine name 'multinode-839362-m02' in profile 'multinode-839362'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-839362-m03 --driver=kvm2 
E0709 17:23:34.956407   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
E0709 17:23:44.780295   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-839362-m03 --driver=kvm2 : (50.869277598s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-839362
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-839362: exit status 80 (215.620957ms)

-- stdout --
	* Adding node m03 to cluster multinode-839362 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-839362-m03 already exists in multinode-839362-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-839362-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-839362-m03: (1.009908229s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (52.20s)

TestPreload (316.64s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-564470 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0709 17:25:31.910348   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-564470 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (4m10.330024723s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-564470 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-564470 image pull gcr.io/k8s-minikube/busybox: (1.198317657s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-564470
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-564470: (13.297496835s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-564470 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0709 17:28:44.779333   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-564470 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (50.765649163s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-564470 image list
helpers_test.go:175: Cleaning up "test-preload-564470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-564470
--- PASS: TestPreload (316.64s)

TestScheduledStopUnix (119.8s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-813470 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-813470 --memory=2048 --driver=kvm2 : (48.27631027s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-813470 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-813470 -n scheduled-stop-813470
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-813470 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-813470 --cancel-scheduled
E0709 17:30:31.911359   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-813470 -n scheduled-stop-813470
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-813470
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-813470 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-813470
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-813470: exit status 7 (64.417377ms)

-- stdout --
	scheduled-stop-813470
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-813470 -n scheduled-stop-813470
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-813470 -n scheduled-stop-813470: exit status 7 (55.625315ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-813470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-813470
--- PASS: TestScheduledStopUnix (119.80s)

TestSkaffold (148.23s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe862428971 version
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-323186 --memory=2600 --driver=kvm2 
E0709 17:31:47.826380   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-323186 --memory=2600 --driver=kvm2 : (54.163766519s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe862428971 run --minikube-profile skaffold-323186 --kube-context skaffold-323186 --status-check=true --port-forward=false --interactive=false
E0709 17:33:44.780527   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe862428971 run --minikube-profile skaffold-323186 --kube-context skaffold-323186 --status-check=true --port-forward=false --interactive=false: (1m21.228209991s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-69d5dbcd67-nbs9n" [fd1e669c-365a-487b-873d-ac010339cb88] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004210548s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-58895796c6-vqgms" [02bfcd3b-9131-4efd-83f6-247c51a2874a] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003715806s
helpers_test.go:175: Cleaning up "skaffold-323186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-323186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-323186: (1.18627917s)
--- PASS: TestSkaffold (148.23s)

TestRunningBinaryUpgrade (174.15s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.394301551 start -p running-upgrade-880806 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.394301551 start -p running-upgrade-880806 --memory=2200 --vm-driver=kvm2 : (1m57.216224978s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-880806 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-880806 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (55.434305772s)
helpers_test.go:175: Cleaning up "running-upgrade-880806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-880806
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-880806: (1.169009753s)
--- PASS: TestRunningBinaryUpgrade (174.15s)

TestKubernetesUpgrade (162.75s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-494799 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-494799 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (1m19.103091576s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-494799
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-494799: (3.2986091s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-494799 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-494799 status --format={{.Host}}: exit status 7 (68.115025ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-494799 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-494799 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2 : (42.66716592s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-494799 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-494799 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-494799 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (77.499402ms)

-- stdout --
	* [kubernetes-upgrade-494799] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19199
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19199-7540/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19199-7540/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-494799
	    minikube start -p kubernetes-upgrade-494799 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4947992 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.2, by running:
	    
	    minikube start -p kubernetes-upgrade-494799 --kubernetes-version=v1.30.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-494799 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2 
E0709 17:39:28.116000   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/skaffold-323186/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-494799 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2 : (36.220184677s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-494799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-494799
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-494799: (1.263202556s)
--- PASS: TestKubernetesUpgrade (162.75s)

TestStoppedBinaryUpgrade/Setup (0.34s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.34s)

TestStoppedBinaryUpgrade/Upgrade (253.14s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2942709183 start -p stopped-upgrade-953575 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Non-zero exit: /tmp/minikube-v1.26.0.2942709183 start -p stopped-upgrade-953575 --memory=2200 --vm-driver=kvm2 : exit status 90 (1m14.380650421s)

-- stdout --
	* [stopped-upgrade-953575] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19199
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19199-7540/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig1377115139
	* Using the kvm2 driver based on user configuration
	* Starting control plane node stopped-upgrade-953575 in cluster stopped-upgrade-953575
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-09T17:39:10Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/cri-dockerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2942709183 start -p stopped-upgrade-953575 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2942709183 start -p stopped-upgrade-953575 --memory=2200 --vm-driver=kvm2 : (1m5.05315374s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2942709183 -p stopped-upgrade-953575 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2942709183 -p stopped-upgrade-953575 stop: (12.634147631s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-953575 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0709 17:40:31.911352   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-953575 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m39.700424759s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (253.14s)

TestPause/serial/Start (105.98s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-190839 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-190839 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m45.981258353s)
--- PASS: TestPause/serial/Start (105.98s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-195919 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-195919 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (79.140779ms)

-- stdout --
	* [NoKubernetes-195919] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19199
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19199-7540/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19199-7540/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (78.28s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-195919 --driver=kvm2 
E0709 17:40:09.076227   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/skaffold-323186/client.crt: no such file or directory
E0709 17:40:14.957370   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-195919 --driver=kvm2 : (1m18.004151074s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-195919 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (78.28s)

TestNetworkPlugins/group/auto/Start (141.83s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-376121 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-376121 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (2m21.825635378s)
--- PASS: TestNetworkPlugins/group/auto/Start (141.83s)

TestNoKubernetes/serial/StartWithStopK8s (35.1s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-195919 --no-kubernetes --driver=kvm2 
E0709 17:41:30.996672   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/skaffold-323186/client.crt: no such file or directory
E0709 17:41:34.134905   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/gvisor-789900/client.crt: no such file or directory
E0709 17:41:34.140212   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/gvisor-789900/client.crt: no such file or directory
E0709 17:41:34.150526   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/gvisor-789900/client.crt: no such file or directory
E0709 17:41:34.170903   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/gvisor-789900/client.crt: no such file or directory
E0709 17:41:34.212081   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/gvisor-789900/client.crt: no such file or directory
E0709 17:41:34.292492   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/gvisor-789900/client.crt: no such file or directory
E0709 17:41:34.452683   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/gvisor-789900/client.crt: no such file or directory
E0709 17:41:34.773328   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/gvisor-789900/client.crt: no such file or directory
E0709 17:41:35.414393   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/gvisor-789900/client.crt: no such file or directory
E0709 17:41:36.695497   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/gvisor-789900/client.crt: no such file or directory
E0709 17:41:39.256293   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/gvisor-789900/client.crt: no such file or directory
E0709 17:41:44.377192   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/gvisor-789900/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-195919 --no-kubernetes --driver=kvm2 : (33.563180792s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-195919 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-195919 status -o json: exit status 2 (257.261385ms)

-- stdout --
	{"Name":"NoKubernetes-195919","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-195919
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-195919: (1.280197686s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (35.10s)

TestPause/serial/SecondStartNoReconfiguration (51.5s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-190839 --alsologtostderr -v=1 --driver=kvm2 
E0709 17:41:54.617615   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/gvisor-789900/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-190839 --alsologtostderr -v=1 --driver=kvm2 : (51.472908084s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (51.50s)

TestNoKubernetes/serial/Start (31.79s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-195919 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-195919 --no-kubernetes --driver=kvm2 : (31.790843351s)
--- PASS: TestNoKubernetes/serial/Start (31.79s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-953575
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-953575: (1.045959745s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

TestNetworkPlugins/group/kindnet/Start (89.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-376121 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E0709 17:42:15.098498   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/gvisor-789900/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-376121 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m29.150576624s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (89.15s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-195919 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-195919 "sudo systemctl is-active --quiet service kubelet": exit status 1 (200.521922ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

TestNoKubernetes/serial/ProfileList (1.22s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.22s)

TestNoKubernetes/serial/Stop (2.43s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-195919
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-195919: (2.430678171s)
--- PASS: TestNoKubernetes/serial/Stop (2.43s)

TestNoKubernetes/serial/StartNoArgs (44.62s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-195919 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-195919 --driver=kvm2 : (44.624162836s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (44.62s)

TestPause/serial/Pause (0.58s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-190839 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.58s)

TestPause/serial/VerifyStatus (0.26s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-190839 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-190839 --output=json --layout=cluster: exit status 2 (256.45409ms)

-- stdout --
	{"Name":"pause-190839","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-190839","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)

TestPause/serial/Unpause (0.55s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-190839 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.55s)

TestPause/serial/PauseAgain (0.72s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-190839 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.72s)

TestPause/serial/DeletePaused (1.04s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-190839 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-190839 --alsologtostderr -v=5: (1.04347412s)
--- PASS: TestPause/serial/DeletePaused (1.04s)

TestPause/serial/VerifyDeletedResources (0.27s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.27s)

TestNetworkPlugins/group/calico/Start (127.86s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-376121 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-376121 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m7.857669211s)
--- PASS: TestNetworkPlugins/group/calico/Start (127.86s)

TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-376121 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

TestNetworkPlugins/group/auto/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-376121 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-vmcs5" [f49fda13-fedc-4519-ab06-2d96653a47ba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-vmcs5" [f49fda13-fedc-4519-ab06-2d96653a47ba] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004147068s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.23s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-376121 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-376121 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-376121 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/Start (113.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-376121 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-376121 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m53.144560457s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (113.14s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-195919 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-195919 "sudo systemctl is-active --quiet service kubelet": exit status 1 (191.572765ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

TestNetworkPlugins/group/false/Start (123.09s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-376121 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-376121 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (2m3.088781342s)
--- PASS: TestNetworkPlugins/group/false/Start (123.09s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-b6svn" [a29f8d11-0081-46b4-95ff-72a1aa372fe4] Running
E0709 17:43:44.780121   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004049246s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-376121 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.19s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-376121 replace --force -f testdata/netcat-deployment.yaml
E0709 17:43:47.154378   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/skaffold-323186/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bf92k" [36e39d2a-c9ce-4470-81e7-24ffa4f2cf91] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-bf92k" [36e39d2a-c9ce-4470-81e7-24ffa4f2cf91] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004468262s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.19s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-376121 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-376121 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-376121 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/Start (127.5s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-376121 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0709 17:44:17.979720   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/gvisor-789900/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-376121 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (2m7.499807903s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (127.50s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4qjhd" [11f8cc2f-e31d-44b7-8c09-30fd279814db] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00677514s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-376121 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-376121 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-t8vr5" [89be6dc8-91c0-4515-84c5-752f04c4b727] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-t8vr5" [89be6dc8-91c0-4515-84c5-752f04c4b727] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.00474893s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.28s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-376121 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-376121 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-xsctj" [d6624bd9-5365-4db2-895f-6309f5e9f05e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-xsctj" [d6624bd9-5365-4db2-895f-6309f5e9f05e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005516034s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-376121 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-376121 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-376121 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-376121 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.30s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-376121 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-376121 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/false/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-376121 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.25s)

TestNetworkPlugins/group/false/NetCatPod (12.39s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-376121 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-274lb" [d6cbf3be-7701-421d-981d-cbf67d66054f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-274lb" [d6cbf3be-7701-421d-981d-cbf67d66054f] Running
E0709 17:45:31.910740   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.00401213s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.39s)

TestNetworkPlugins/group/flannel/Start (80.42s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-376121 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-376121 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m20.420415146s)
--- PASS: TestNetworkPlugins/group/flannel/Start (80.42s)

TestNetworkPlugins/group/bridge/Start (94.89s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-376121 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-376121 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m34.890197643s)
--- PASS: TestNetworkPlugins/group/bridge/Start (94.89s)

TestNetworkPlugins/group/false/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-376121 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

TestNetworkPlugins/group/false/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-376121 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

TestNetworkPlugins/group/false/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-376121 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

TestNetworkPlugins/group/kubenet/Start (105.75s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-376121 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-376121 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m45.748877325s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (105.75s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-376121 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-376121 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rxq58" [35332994-cc88-4ba6-89cf-c9a07f295a03] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-rxq58" [35332994-cc88-4ba6-89cf-c9a07f295a03] Running
E0709 17:46:34.135555   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/gvisor-789900/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004110621s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.24s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-376121 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-376121 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-376121 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nmtzl" [f8b8e686-70ed-4caa-8aae-c0170d342ff3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006281701s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestStartStop/group/old-k8s-version/serial/FirstStart (147.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-496320 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-496320 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (2m27.444621106s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (147.44s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-376121 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/flannel/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-376121 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gkcjl" [b4d0fbdd-d20d-459c-b988-6aaba70e81ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-gkcjl" [b4d0fbdd-d20d-459c-b988-6aaba70e81ae] Running
E0709 17:47:01.819890   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/gvisor-789900/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.005472293s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.32s)

TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-376121 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-376121 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-376121 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-376121 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

TestNetworkPlugins/group/bridge/NetCatPod (12.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-376121 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-49h7t" [2167f258-c8f5-4e64-a5ac-cc359d85e4bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-49h7t" [2167f258-c8f5-4e64-a5ac-cc359d85e4bd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004804795s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.24s)

TestNetworkPlugins/group/bridge/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-376121 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-376121 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-376121 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestStartStop/group/no-preload/serial/FirstStart (119.84s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-840186 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-840186 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.30.2: (1m59.837403014s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (119.84s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-376121 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.39s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-376121 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bxsfl" [c7dd6f6a-7b9f-4874-9528-8eb100384c81] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-bxsfl" [c7dd6f6a-7b9f-4874-9528-8eb100384c81] Running
E0709 17:47:43.524424   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/auto-376121/client.crt: no such file or directory
E0709 17:47:43.684863   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/auto-376121/client.crt: no such file or directory
E0709 17:47:44.005317   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/auto-376121/client.crt: no such file or directory
E0709 17:47:44.645570   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/auto-376121/client.crt: no such file or directory
E0709 17:47:45.925832   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/auto-376121/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.003587742s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.21s)

TestStartStop/group/embed-certs/serial/FirstStart (119.83s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-087236 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.2
E0709 17:47:43.368017   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/auto-376121/client.crt: no such file or directory
E0709 17:47:43.373282   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/auto-376121/client.crt: no such file or directory
E0709 17:47:43.383591   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/auto-376121/client.crt: no such file or directory
E0709 17:47:43.403916   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/auto-376121/client.crt: no such file or directory
E0709 17:47:43.444226   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/auto-376121/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-087236 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.2: (1m59.831637056s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (119.83s)

TestNetworkPlugins/group/kubenet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-376121 exec deployment/netcat -- nslookup kubernetes.default
E0709 17:47:48.486263   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/auto-376121/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.15s)

TestNetworkPlugins/group/kubenet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-376121 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

TestNetworkPlugins/group/kubenet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-376121 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.14s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (125.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-684406 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.2
E0709 17:48:24.328535   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/auto-376121/client.crt: no such file or directory
E0709 17:48:27.826569   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
E0709 17:48:40.376496   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kindnet-376121/client.crt: no such file or directory
E0709 17:48:40.381821   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kindnet-376121/client.crt: no such file or directory
E0709 17:48:40.392129   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kindnet-376121/client.crt: no such file or directory
E0709 17:48:40.412507   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kindnet-376121/client.crt: no such file or directory
E0709 17:48:40.452832   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kindnet-376121/client.crt: no such file or directory
E0709 17:48:40.533216   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kindnet-376121/client.crt: no such file or directory
E0709 17:48:40.693551   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kindnet-376121/client.crt: no such file or directory
E0709 17:48:41.014104   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kindnet-376121/client.crt: no such file or directory
E0709 17:48:41.654263   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kindnet-376121/client.crt: no such file or directory
E0709 17:48:42.935090   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kindnet-376121/client.crt: no such file or directory
E0709 17:48:44.779411   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
E0709 17:48:45.495874   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kindnet-376121/client.crt: no such file or directory
E0709 17:48:47.153851   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/skaffold-323186/client.crt: no such file or directory
E0709 17:48:50.616473   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kindnet-376121/client.crt: no such file or directory
E0709 17:49:00.856935   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kindnet-376121/client.crt: no such file or directory
E0709 17:49:05.288984   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/auto-376121/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-684406 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.2: (2m5.059600382s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (125.06s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-496320 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2730d5ab-1835-476d-ac91-b745f0772e2a] Pending
helpers_test.go:344: "busybox" [2730d5ab-1835-476d-ac91-b745f0772e2a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0709 17:49:21.337988   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kindnet-376121/client.crt: no such file or directory
helpers_test.go:344: "busybox" [2730d5ab-1835-476d-ac91-b745f0772e2a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.009307422s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-496320 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.48s)

TestStartStop/group/no-preload/serial/DeployApp (8.42s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-840186 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3e840cec-bf2e-46af-b607-c4d201c32ba4] Pending
helpers_test.go:344: "busybox" [3e840cec-bf2e-46af-b607-c4d201c32ba4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3e840cec-bf2e-46af-b607-c4d201c32ba4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00517829s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-840186 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.42s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-496320 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-496320 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/old-k8s-version/serial/Stop (13.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-496320 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-496320 --alsologtostderr -v=3: (13.342626735s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.34s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-840186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-840186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.023694703s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-840186 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/no-preload/serial/Stop (13.4s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-840186 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-840186 --alsologtostderr -v=3: (13.40257312s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.40s)

TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-087236 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [15e9a7ae-23b3-4ec6-b120-1113fdf06116] Pending
helpers_test.go:344: "busybox" [15e9a7ae-23b3-4ec6-b120-1113fdf06116] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [15e9a7ae-23b3-4ec6-b120-1113fdf06116] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004662242s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-087236 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-496320 -n old-k8s-version-496320
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-496320 -n old-k8s-version-496320: exit status 7 (63.719151ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-496320 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (400.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-496320 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-496320 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (6m40.02000951s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-496320 -n old-k8s-version-496320
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (400.28s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-840186 -n no-preload-840186
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-840186 -n no-preload-840186: exit status 7 (63.710288ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-840186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (330.1s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-840186 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-840186 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.30.2: (5m29.841159532s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-840186 -n no-preload-840186
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (330.10s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-087236 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-087236 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/embed-certs/serial/Stop (13.36s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-087236 --alsologtostderr -v=3
E0709 17:49:49.971310   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/calico-376121/client.crt: no such file or directory
E0709 17:49:49.976611   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/calico-376121/client.crt: no such file or directory
E0709 17:49:49.986963   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/calico-376121/client.crt: no such file or directory
E0709 17:49:50.007266   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/calico-376121/client.crt: no such file or directory
E0709 17:49:50.047574   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/calico-376121/client.crt: no such file or directory
E0709 17:49:50.127903   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/calico-376121/client.crt: no such file or directory
E0709 17:49:50.288281   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/calico-376121/client.crt: no such file or directory
E0709 17:49:50.609252   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/calico-376121/client.crt: no such file or directory
E0709 17:49:51.250324   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/calico-376121/client.crt: no such file or directory
E0709 17:49:52.530644   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/calico-376121/client.crt: no such file or directory
E0709 17:49:55.091665   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/calico-376121/client.crt: no such file or directory
E0709 17:50:00.212234   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/calico-376121/client.crt: no such file or directory
E0709 17:50:02.299056   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kindnet-376121/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-087236 --alsologtostderr -v=3: (13.363913882s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.36s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-087236 -n embed-certs-087236
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-087236 -n embed-certs-087236: exit status 7 (62.263177ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-087236 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/SecondStart (347.77s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-087236 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.2
E0709 17:50:03.162869   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/custom-flannel-376121/client.crt: no such file or directory
E0709 17:50:03.168110   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/custom-flannel-376121/client.crt: no such file or directory
E0709 17:50:03.178360   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/custom-flannel-376121/client.crt: no such file or directory
E0709 17:50:03.198608   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/custom-flannel-376121/client.crt: no such file or directory
E0709 17:50:03.238864   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/custom-flannel-376121/client.crt: no such file or directory
E0709 17:50:03.319190   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/custom-flannel-376121/client.crt: no such file or directory
E0709 17:50:03.480347   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/custom-flannel-376121/client.crt: no such file or directory
E0709 17:50:03.800477   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/custom-flannel-376121/client.crt: no such file or directory
E0709 17:50:04.441333   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/custom-flannel-376121/client.crt: no such file or directory
E0709 17:50:05.722474   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/custom-flannel-376121/client.crt: no such file or directory
E0709 17:50:08.282698   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/custom-flannel-376121/client.crt: no such file or directory
E0709 17:50:10.453019   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/calico-376121/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-087236 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.2: (5m47.502439311s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-087236 -n embed-certs-087236
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (347.77s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-684406 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bb89c7ec-104e-4f16-b7fb-8d2cf9c7befd] Pending
helpers_test.go:344: "busybox" [bb89c7ec-104e-4f16-b7fb-8d2cf9c7befd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0709 17:50:13.402900   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/custom-flannel-376121/client.crt: no such file or directory
helpers_test.go:344: "busybox" [bb89c7ec-104e-4f16-b7fb-8d2cf9c7befd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003539236s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-684406 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.33s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-684406 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-684406 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-684406 --alsologtostderr -v=3
E0709 17:50:23.643585   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/custom-flannel-376121/client.crt: no such file or directory
E0709 17:50:24.494284   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/false-376121/client.crt: no such file or directory
E0709 17:50:24.499548   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/false-376121/client.crt: no such file or directory
E0709 17:50:24.509799   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/false-376121/client.crt: no such file or directory
E0709 17:50:24.530059   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/false-376121/client.crt: no such file or directory
E0709 17:50:24.570952   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/false-376121/client.crt: no such file or directory
E0709 17:50:24.651710   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/false-376121/client.crt: no such file or directory
E0709 17:50:24.812806   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/false-376121/client.crt: no such file or directory
E0709 17:50:25.133323   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/false-376121/client.crt: no such file or directory
E0709 17:50:25.773926   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/false-376121/client.crt: no such file or directory
E0709 17:50:27.054379   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/false-376121/client.crt: no such file or directory
E0709 17:50:27.209349   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/auto-376121/client.crt: no such file or directory
E0709 17:50:29.615565   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/false-376121/client.crt: no such file or directory
E0709 17:50:30.933815   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/calico-376121/client.crt: no such file or directory
E0709 17:50:31.910510   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-684406 --alsologtostderr -v=3: (13.315968926s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-684406 -n default-k8s-diff-port-684406
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-684406 -n default-k8s-diff-port-684406: exit status 7 (81.529161ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-684406 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (325.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-684406 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.2
E0709 17:50:34.736232   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/false-376121/client.crt: no such file or directory
E0709 17:50:44.123915   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/custom-flannel-376121/client.crt: no such file or directory
E0709 17:50:44.976522   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/false-376121/client.crt: no such file or directory
E0709 17:51:05.456904   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/false-376121/client.crt: no such file or directory
E0709 17:51:11.894603   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/calico-376121/client.crt: no such file or directory
E0709 17:51:24.201568   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/enable-default-cni-376121/client.crt: no such file or directory
E0709 17:51:24.206832   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/enable-default-cni-376121/client.crt: no such file or directory
E0709 17:51:24.217117   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/enable-default-cni-376121/client.crt: no such file or directory
E0709 17:51:24.219305   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kindnet-376121/client.crt: no such file or directory
E0709 17:51:24.237521   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/enable-default-cni-376121/client.crt: no such file or directory
E0709 17:51:24.277801   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/enable-default-cni-376121/client.crt: no such file or directory
E0709 17:51:24.358083   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/enable-default-cni-376121/client.crt: no such file or directory
E0709 17:51:24.518599   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/enable-default-cni-376121/client.crt: no such file or directory
E0709 17:51:24.839015   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/enable-default-cni-376121/client.crt: no such file or directory
E0709 17:51:25.084257   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/custom-flannel-376121/client.crt: no such file or directory
E0709 17:51:25.479970   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/enable-default-cni-376121/client.crt: no such file or directory
E0709 17:51:26.760228   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/enable-default-cni-376121/client.crt: no such file or directory
E0709 17:51:29.320656   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/enable-default-cni-376121/client.crt: no such file or directory
E0709 17:51:34.135917   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/gvisor-789900/client.crt: no such file or directory
E0709 17:51:34.441453   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/enable-default-cni-376121/client.crt: no such file or directory
E0709 17:51:44.682268   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/enable-default-cni-376121/client.crt: no such file or directory
E0709 17:51:46.417859   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/false-376121/client.crt: no such file or directory
E0709 17:51:46.893373   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/flannel-376121/client.crt: no such file or directory
E0709 17:51:46.898637   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/flannel-376121/client.crt: no such file or directory
E0709 17:51:46.908912   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/flannel-376121/client.crt: no such file or directory
E0709 17:51:46.929192   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/flannel-376121/client.crt: no such file or directory
E0709 17:51:46.969473   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/flannel-376121/client.crt: no such file or directory
E0709 17:51:47.049798   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/flannel-376121/client.crt: no such file or directory
E0709 17:51:47.210301   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/flannel-376121/client.crt: no such file or directory
E0709 17:51:47.530850   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/flannel-376121/client.crt: no such file or directory
E0709 17:51:48.171527   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/flannel-376121/client.crt: no such file or directory
E0709 17:51:49.451673   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/flannel-376121/client.crt: no such file or directory
E0709 17:51:52.012719   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/flannel-376121/client.crt: no such file or directory
E0709 17:51:57.133373   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/flannel-376121/client.crt: no such file or directory
E0709 17:52:05.162636   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/enable-default-cni-376121/client.crt: no such file or directory
E0709 17:52:07.373697   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/flannel-376121/client.crt: no such file or directory
E0709 17:52:08.788702   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/bridge-376121/client.crt: no such file or directory
E0709 17:52:08.793949   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/bridge-376121/client.crt: no such file or directory
E0709 17:52:08.804220   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/bridge-376121/client.crt: no such file or directory
E0709 17:52:08.824466   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/bridge-376121/client.crt: no such file or directory
E0709 17:52:08.864937   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/bridge-376121/client.crt: no such file or directory
E0709 17:52:08.945399   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/bridge-376121/client.crt: no such file or directory
E0709 17:52:09.105812   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/bridge-376121/client.crt: no such file or directory
E0709 17:52:09.426063   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/bridge-376121/client.crt: no such file or directory
E0709 17:52:10.066715   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/bridge-376121/client.crt: no such file or directory
E0709 17:52:11.346881   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/bridge-376121/client.crt: no such file or directory
E0709 17:52:13.907229   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/bridge-376121/client.crt: no such file or directory
E0709 17:52:19.028099   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/bridge-376121/client.crt: no such file or directory
E0709 17:52:27.854667   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/flannel-376121/client.crt: no such file or directory
E0709 17:52:29.269070   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/bridge-376121/client.crt: no such file or directory
E0709 17:52:33.815231   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/calico-376121/client.crt: no such file or directory
E0709 17:52:38.457042   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kubenet-376121/client.crt: no such file or directory
E0709 17:52:38.462261   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kubenet-376121/client.crt: no such file or directory
E0709 17:52:38.472516   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kubenet-376121/client.crt: no such file or directory
E0709 17:52:38.492787   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kubenet-376121/client.crt: no such file or directory
E0709 17:52:38.533085   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kubenet-376121/client.crt: no such file or directory
E0709 17:52:38.613422   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kubenet-376121/client.crt: no such file or directory
E0709 17:52:38.773866   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kubenet-376121/client.crt: no such file or directory
E0709 17:52:39.094554   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kubenet-376121/client.crt: no such file or directory
E0709 17:52:39.734752   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kubenet-376121/client.crt: no such file or directory
E0709 17:52:41.015412   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kubenet-376121/client.crt: no such file or directory
E0709 17:52:43.368238   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/auto-376121/client.crt: no such file or directory
E0709 17:52:43.576606   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kubenet-376121/client.crt: no such file or directory
E0709 17:52:46.123774   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/enable-default-cni-376121/client.crt: no such file or directory
E0709 17:52:47.005190   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/custom-flannel-376121/client.crt: no such file or directory
E0709 17:52:48.697334   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kubenet-376121/client.crt: no such file or directory
E0709 17:52:49.749854   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/bridge-376121/client.crt: no such file or directory
E0709 17:52:58.938145   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kubenet-376121/client.crt: no such file or directory
E0709 17:53:08.338970   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/false-376121/client.crt: no such file or directory
E0709 17:53:08.814853   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/flannel-376121/client.crt: no such file or directory
E0709 17:53:11.050389   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/auto-376121/client.crt: no such file or directory
E0709 17:53:19.418964   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kubenet-376121/client.crt: no such file or directory
E0709 17:53:30.710007   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/bridge-376121/client.crt: no such file or directory
E0709 17:53:40.376424   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kindnet-376121/client.crt: no such file or directory
E0709 17:53:44.780243   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/functional-192943/client.crt: no such file or directory
E0709 17:53:47.153743   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/skaffold-323186/client.crt: no such file or directory
E0709 17:54:00.379932   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kubenet-376121/client.crt: no such file or directory
E0709 17:54:08.044415   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/enable-default-cni-376121/client.crt: no such file or directory
E0709 17:54:08.059608   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kindnet-376121/client.crt: no such file or directory
E0709 17:54:30.735497   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/flannel-376121/client.crt: no such file or directory
E0709 17:54:49.970610   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/calico-376121/client.crt: no such file or directory
E0709 17:54:52.630794   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/bridge-376121/client.crt: no such file or directory
E0709 17:55:03.163300   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/custom-flannel-376121/client.crt: no such file or directory
E0709 17:55:10.198676   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/skaffold-323186/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-684406 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.2: (5m25.114785501s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-684406 -n default-k8s-diff-port-684406
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (325.49s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-cqj52" [4823e5dc-ea56-4572-9614-b7eb7e3b2396] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0709 17:55:17.656211   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/calico-376121/client.crt: no such file or directory
E0709 17:55:22.300291   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kubenet-376121/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-779776cb65-cqj52" [4823e5dc-ea56-4572-9614-b7eb7e3b2396] Running
E0709 17:55:24.494779   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/false-376121/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.005396964s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-cqj52" [4823e5dc-ea56-4572-9614-b7eb7e3b2396] Running
E0709 17:55:30.845508   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/custom-flannel-376121/client.crt: no such file or directory
E0709 17:55:31.910883   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004395295s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-840186 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-840186 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.64s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-840186 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-840186 -n no-preload-840186
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-840186 -n no-preload-840186: exit status 2 (252.408551ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-840186 -n no-preload-840186
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-840186 -n no-preload-840186: exit status 2 (242.862287ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-840186 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-840186 -n no-preload-840186
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-840186 -n no-preload-840186
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.64s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (71.29s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-205750 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-205750 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.30.2: (1m11.29088348s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (71.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-wm4bv" [da5831d8-7b55-44c9-bceb-396c2407906c] Running
E0709 17:55:52.179182   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/false-376121/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003653359s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-wm4bv" [da5831d8-7b55-44c9-bceb-396c2407906c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004945311s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-087236 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-45rc9" [0d616fa0-1d4d-4e0f-850a-a4ea34f4f678] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-45rc9" [0d616fa0-1d4d-4e0f-850a-a4ea34f4f678] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.030141007s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-087236 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (2.76s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-087236 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-087236 -n embed-certs-087236
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-087236 -n embed-certs-087236: exit status 2 (246.26053ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-087236 -n embed-certs-087236
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-087236 -n embed-certs-087236: exit status 2 (252.157571ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-087236 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-087236 -n embed-certs-087236
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-087236 -n embed-certs-087236
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.76s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-45rc9" [0d616fa0-1d4d-4e0f-850a-a4ea34f4f678] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005469032s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-684406 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-684406 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-684406 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-684406 -n default-k8s-diff-port-684406
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-684406 -n default-k8s-diff-port-684406: exit status 2 (229.772083ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-684406 -n default-k8s-diff-port-684406
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-684406 -n default-k8s-diff-port-684406: exit status 2 (242.67564ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-684406 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-684406 -n default-k8s-diff-port-684406
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-684406 -n default-k8s-diff-port-684406
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.53s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9xwnf" [2b1b3199-b59d-49b0-9c25-e9720da61864] Running
E0709 17:56:24.200918   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/enable-default-cni-376121/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004895094s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9xwnf" [2b1b3199-b59d-49b0-9c25-e9720da61864] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004746141s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-496320 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0709 17:56:34.135636   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/gvisor-789900/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-496320 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/old-k8s-version/serial/Pause (2.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-496320 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-496320 -n old-k8s-version-496320
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-496320 -n old-k8s-version-496320: exit status 2 (237.364652ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-496320 -n old-k8s-version-496320
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-496320 -n old-k8s-version-496320: exit status 2 (242.106708ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-496320 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-496320 -n old-k8s-version-496320
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-496320 -n old-k8s-version-496320
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.36s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.81s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-205750 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.81s)

TestStartStop/group/newest-cni/serial/Stop (13.31s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-205750 --alsologtostderr -v=3
E0709 17:56:51.885069   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/enable-default-cni-376121/client.crt: no such file or directory
E0709 17:56:54.958169   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/addons-470383/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-205750 --alsologtostderr -v=3: (13.314551977s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.31s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-205750 -n newest-cni-205750
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-205750 -n newest-cni-205750: exit status 7 (64.902997ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-205750 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (36.91s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-205750 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.30.2
E0709 17:57:08.789032   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/bridge-376121/client.crt: no such file or directory
E0709 17:57:14.575989   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/flannel-376121/client.crt: no such file or directory
E0709 17:57:36.471875   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/bridge-376121/client.crt: no such file or directory
E0709 17:57:38.457106   14701 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19199-7540/.minikube/profiles/kubenet-376121/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-205750 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.30.2: (36.657838505s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-205750 -n newest-cni-205750
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.91s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-205750 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.19s)

TestStartStop/group/newest-cni/serial/Pause (2.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-205750 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-205750 -n newest-cni-205750
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-205750 -n newest-cni-205750: exit status 2 (224.199176ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-205750 -n newest-cni-205750
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-205750 -n newest-cni-205750: exit status 2 (229.195749ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-205750 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-205750 -n newest-cni-205750
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-205750 -n newest-cni-205750
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.11s)

Test skip (31/341)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

TestDownloadOnly/v1.30.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

TestDownloadOnly/v1.30.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.03s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-376121 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-376121

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-376121

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-376121

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-376121

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-376121

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-376121

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-376121

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-376121

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-376121

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-376121

>>> host: /etc/nsswitch.conf:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: /etc/hosts:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: /etc/resolv.conf:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-376121

>>> host: crictl pods:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: crictl containers:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> k8s: describe netcat deployment:
error: context "cilium-376121" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-376121" does not exist

>>> k8s: netcat logs:
error: context "cilium-376121" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-376121" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-376121" does not exist

>>> k8s: coredns logs:
error: context "cilium-376121" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-376121" does not exist

>>> k8s: api server logs:
error: context "cilium-376121" does not exist

>>> host: /etc/cni:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: ip a s:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: ip r s:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: iptables-save:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: iptables table nat:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-376121

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-376121

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-376121" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-376121" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-376121

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-376121

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-376121" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-376121" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-376121" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-376121" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-376121" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: kubelet daemon config:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> k8s: kubelet logs:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-376121

>>> host: docker daemon status:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: docker daemon config:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: docker system info:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: cri-docker daemon status:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: cri-docker daemon config:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: cri-dockerd version:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: containerd daemon status:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: containerd daemon config:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: containerd config dump:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: crio daemon status:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: crio daemon config:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: /etc/crio:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

>>> host: crio config:
* Profile "cilium-376121" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376121"

----------------------- debugLogs end: cilium-376121 [took: 2.893693546s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-376121" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-376121
--- SKIP: TestNetworkPlugins/group/cilium (3.03s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-657267" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-657267
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)