Test Report: KVM_Linux 19636

a6feba20ebb4dc887776b248ea5c810d31cc7846:2024-09-13:36198

Tests failed (1/340)

Order  Failed test                    Duration (s)
33     TestAddons/parallel/Registry   73.54
TestAddons/parallel/Registry (73.54s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.689391ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-d5lpq" [8c735e84-f4ab-4bf1-aad6-c5a4d187b69d] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002780834s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9sz2r" [52fdf99f-a086-42c5-88fc-da0c47c197d1] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004226791s
addons_test.go:338: (dbg) Run:  kubectl --context addons-084503 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-084503 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-084503 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.084482664s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-084503 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-084503 ip
2024/09/13 18:34:19 [DEBUG] GET http://192.168.39.228:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-084503 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-084503 -n addons-084503
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-084503 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-601425                                                                     | download-only-601425 | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| delete  | -p download-only-504525                                                                     | download-only-504525 | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-322824 | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC |                     |
	|         | binary-mirror-322824                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:34697                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-322824                                                                     | binary-mirror-322824 | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| addons  | enable dashboard -p                                                                         | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC |                     |
	|         | addons-084503                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC |                     |
	|         | addons-084503                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-084503 --wait=true                                                                | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2  --addons=ingress                                                             |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | addons-084503 addons disable                                                                | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC | 13 Sep 24 18:25 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-084503 addons disable                                                                | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-084503 addons                                                                        | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	|         | -p addons-084503                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	|         | addons-084503                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	|         | -p addons-084503                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-084503 ssh cat                                                                       | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	|         | /opt/local-path-provisioner/pvc-a311ee20-76c9-43bb-aa4f-017e3c6d3a8c_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-084503 addons disable                                                                | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:34 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-084503 addons disable                                                                | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-084503 addons                                                                        | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	|         | addons-084503                                                                               |                      |         |         |                     |                     |
	| addons  | addons-084503 addons                                                                        | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-084503 ssh curl -s                                                                   | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:34 UTC | 13 Sep 24 18:34 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-084503 ip                                                                            | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:34 UTC | 13 Sep 24 18:34 UTC |
	| addons  | addons-084503 addons disable                                                                | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:34 UTC | 13 Sep 24 18:34 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-084503 addons disable                                                                | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:34 UTC | 13 Sep 24 18:34 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| ip      | addons-084503 ip                                                                            | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:34 UTC | 13 Sep 24 18:34 UTC |
	| addons  | addons-084503 addons disable                                                                | addons-084503        | jenkins | v1.34.0 | 13 Sep 24 18:34 UTC | 13 Sep 24 18:34 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:20:48
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:20:48.258893   11676 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:20:48.258993   11676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:20:48.259000   11676 out.go:358] Setting ErrFile to fd 2...
	I0913 18:20:48.259005   11676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:20:48.259159   11676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3886/.minikube/bin
	I0913 18:20:48.259712   11676 out.go:352] Setting JSON to false
	I0913 18:20:48.260488   11676 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":195,"bootTime":1726251453,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:20:48.260611   11676 start.go:139] virtualization: kvm guest
	I0913 18:20:48.262930   11676 out.go:177] * [addons-084503] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 18:20:48.264249   11676 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:20:48.264309   11676 notify.go:220] Checking for updates...
	I0913 18:20:48.266554   11676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:20:48.267752   11676 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3886/kubeconfig
	I0913 18:20:48.268992   11676 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3886/.minikube
	I0913 18:20:48.270037   11676 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 18:20:48.271330   11676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:20:48.272938   11676 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:20:48.305482   11676 out.go:177] * Using the kvm2 driver based on user configuration
	I0913 18:20:48.306605   11676 start.go:297] selected driver: kvm2
	I0913 18:20:48.306617   11676 start.go:901] validating driver "kvm2" against <nil>
	I0913 18:20:48.306629   11676 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:20:48.307310   11676 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:20:48.307378   11676 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3886/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 18:20:48.322507   11676 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 18:20:48.322565   11676 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 18:20:48.322800   11676 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:20:48.322829   11676 cni.go:84] Creating CNI manager for ""
	I0913 18:20:48.322866   11676 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 18:20:48.322876   11676 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 18:20:48.322918   11676 start.go:340] cluster config:
	{Name:addons-084503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-084503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:20:48.323002   11676 iso.go:125] acquiring lock: {Name:mk12ab92f890170906f67f3ca706a4ea8b0bad2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:20:48.324808   11676 out.go:177] * Starting "addons-084503" primary control-plane node in "addons-084503" cluster
	I0913 18:20:48.325884   11676 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 18:20:48.325913   11676 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-3886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0913 18:20:48.325920   11676 cache.go:56] Caching tarball of preloaded images
	I0913 18:20:48.325978   11676 preload.go:172] Found /home/jenkins/minikube-integration/19636-3886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0913 18:20:48.325988   11676 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 18:20:48.326278   11676 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/config.json ...
	I0913 18:20:48.326300   11676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/config.json: {Name:mkfce5fa4bfae83ca3e187d5a6fe05a7fb0f2770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:20:48.326457   11676 start.go:360] acquireMachinesLock for addons-084503: {Name:mk69bff7e3efaf92c687650aa6237529b7079a42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 18:20:48.326503   11676 start.go:364] duration metric: took 31.872µs to acquireMachinesLock for "addons-084503"
	I0913 18:20:48.326520   11676 start.go:93] Provisioning new machine with config: &{Name:addons-084503 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-084503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 18:20:48.326573   11676 start.go:125] createHost starting for "" (driver="kvm2")
	I0913 18:20:48.328049   11676 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0913 18:20:48.328170   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:20:48.328204   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:20:48.342600   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41169
	I0913 18:20:48.343079   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:20:48.343737   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:20:48.343761   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:20:48.344102   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:20:48.344284   11676 main.go:141] libmachine: (addons-084503) Calling .GetMachineName
	I0913 18:20:48.344442   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:20:48.344569   11676 start.go:159] libmachine.API.Create for "addons-084503" (driver="kvm2")
	I0913 18:20:48.344596   11676 client.go:168] LocalClient.Create starting
	I0913 18:20:48.344632   11676 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19636-3886/.minikube/certs/ca.pem
	I0913 18:20:48.452267   11676 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19636-3886/.minikube/certs/cert.pem
	I0913 18:20:48.581209   11676 main.go:141] libmachine: Running pre-create checks...
	I0913 18:20:48.581229   11676 main.go:141] libmachine: (addons-084503) Calling .PreCreateCheck
	I0913 18:20:48.581743   11676 main.go:141] libmachine: (addons-084503) Calling .GetConfigRaw
	I0913 18:20:48.582166   11676 main.go:141] libmachine: Creating machine...
	I0913 18:20:48.582180   11676 main.go:141] libmachine: (addons-084503) Calling .Create
	I0913 18:20:48.582334   11676 main.go:141] libmachine: (addons-084503) Creating KVM machine...
	I0913 18:20:48.583584   11676 main.go:141] libmachine: (addons-084503) DBG | found existing default KVM network
	I0913 18:20:48.584346   11676 main.go:141] libmachine: (addons-084503) DBG | I0913 18:20:48.584196   11698 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
	I0913 18:20:48.584364   11676 main.go:141] libmachine: (addons-084503) DBG | created network xml: 
	I0913 18:20:48.584374   11676 main.go:141] libmachine: (addons-084503) DBG | <network>
	I0913 18:20:48.584379   11676 main.go:141] libmachine: (addons-084503) DBG |   <name>mk-addons-084503</name>
	I0913 18:20:48.584385   11676 main.go:141] libmachine: (addons-084503) DBG |   <dns enable='no'/>
	I0913 18:20:48.584393   11676 main.go:141] libmachine: (addons-084503) DBG |   
	I0913 18:20:48.584403   11676 main.go:141] libmachine: (addons-084503) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0913 18:20:48.584418   11676 main.go:141] libmachine: (addons-084503) DBG |     <dhcp>
	I0913 18:20:48.584427   11676 main.go:141] libmachine: (addons-084503) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0913 18:20:48.584435   11676 main.go:141] libmachine: (addons-084503) DBG |     </dhcp>
	I0913 18:20:48.584440   11676 main.go:141] libmachine: (addons-084503) DBG |   </ip>
	I0913 18:20:48.584444   11676 main.go:141] libmachine: (addons-084503) DBG |   
	I0913 18:20:48.584448   11676 main.go:141] libmachine: (addons-084503) DBG | </network>
	I0913 18:20:48.584452   11676 main.go:141] libmachine: (addons-084503) DBG | 
	I0913 18:20:48.589671   11676 main.go:141] libmachine: (addons-084503) DBG | trying to create private KVM network mk-addons-084503 192.168.39.0/24...
	I0913 18:20:48.653241   11676 main.go:141] libmachine: (addons-084503) DBG | private KVM network mk-addons-084503 192.168.39.0/24 created
	I0913 18:20:48.653273   11676 main.go:141] libmachine: (addons-084503) Setting up store path in /home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503 ...
	I0913 18:20:48.653298   11676 main.go:141] libmachine: (addons-084503) Building disk image from file:///home/jenkins/minikube-integration/19636-3886/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0913 18:20:48.653335   11676 main.go:141] libmachine: (addons-084503) DBG | I0913 18:20:48.653261   11698 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19636-3886/.minikube
	I0913 18:20:48.653460   11676 main.go:141] libmachine: (addons-084503) Downloading /home/jenkins/minikube-integration/19636-3886/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19636-3886/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso...
	I0913 18:20:48.915726   11676 main.go:141] libmachine: (addons-084503) DBG | I0913 18:20:48.915609   11698 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa...
	I0913 18:20:49.056312   11676 main.go:141] libmachine: (addons-084503) DBG | I0913 18:20:49.056174   11698 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/addons-084503.rawdisk...
	I0913 18:20:49.056350   11676 main.go:141] libmachine: (addons-084503) DBG | Writing magic tar header
	I0913 18:20:49.056411   11676 main.go:141] libmachine: (addons-084503) DBG | Writing SSH key tar header
	I0913 18:20:49.056447   11676 main.go:141] libmachine: (addons-084503) DBG | I0913 18:20:49.056301   11698 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503 ...
	I0913 18:20:49.056466   11676 main.go:141] libmachine: (addons-084503) Setting executable bit set on /home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503 (perms=drwx------)
	I0913 18:20:49.056482   11676 main.go:141] libmachine: (addons-084503) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503
	I0913 18:20:49.056500   11676 main.go:141] libmachine: (addons-084503) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3886/.minikube/machines
	I0913 18:20:49.056510   11676 main.go:141] libmachine: (addons-084503) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3886/.minikube
	I0913 18:20:49.056525   11676 main.go:141] libmachine: (addons-084503) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19636-3886
	I0913 18:20:49.056539   11676 main.go:141] libmachine: (addons-084503) Setting executable bit set on /home/jenkins/minikube-integration/19636-3886/.minikube/machines (perms=drwxr-xr-x)
	I0913 18:20:49.056550   11676 main.go:141] libmachine: (addons-084503) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0913 18:20:49.056561   11676 main.go:141] libmachine: (addons-084503) Setting executable bit set on /home/jenkins/minikube-integration/19636-3886/.minikube (perms=drwxr-xr-x)
	I0913 18:20:49.056590   11676 main.go:141] libmachine: (addons-084503) Setting executable bit set on /home/jenkins/minikube-integration/19636-3886 (perms=drwxrwxr-x)
	I0913 18:20:49.056607   11676 main.go:141] libmachine: (addons-084503) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0913 18:20:49.056615   11676 main.go:141] libmachine: (addons-084503) DBG | Checking permissions on dir: /home/jenkins
	I0913 18:20:49.056636   11676 main.go:141] libmachine: (addons-084503) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0913 18:20:49.056656   11676 main.go:141] libmachine: (addons-084503) DBG | Checking permissions on dir: /home
	I0913 18:20:49.056667   11676 main.go:141] libmachine: (addons-084503) Creating domain...
	I0913 18:20:49.056694   11676 main.go:141] libmachine: (addons-084503) DBG | Skipping /home - not owner
	I0913 18:20:49.057653   11676 main.go:141] libmachine: (addons-084503) define libvirt domain using xml: 
	I0913 18:20:49.057685   11676 main.go:141] libmachine: (addons-084503) <domain type='kvm'>
	I0913 18:20:49.057695   11676 main.go:141] libmachine: (addons-084503)   <name>addons-084503</name>
	I0913 18:20:49.057703   11676 main.go:141] libmachine: (addons-084503)   <memory unit='MiB'>4000</memory>
	I0913 18:20:49.057710   11676 main.go:141] libmachine: (addons-084503)   <vcpu>2</vcpu>
	I0913 18:20:49.057714   11676 main.go:141] libmachine: (addons-084503)   <features>
	I0913 18:20:49.057720   11676 main.go:141] libmachine: (addons-084503)     <acpi/>
	I0913 18:20:49.057724   11676 main.go:141] libmachine: (addons-084503)     <apic/>
	I0913 18:20:49.057729   11676 main.go:141] libmachine: (addons-084503)     <pae/>
	I0913 18:20:49.057735   11676 main.go:141] libmachine: (addons-084503)     
	I0913 18:20:49.057740   11676 main.go:141] libmachine: (addons-084503)   </features>
	I0913 18:20:49.057747   11676 main.go:141] libmachine: (addons-084503)   <cpu mode='host-passthrough'>
	I0913 18:20:49.057751   11676 main.go:141] libmachine: (addons-084503)   
	I0913 18:20:49.057761   11676 main.go:141] libmachine: (addons-084503)   </cpu>
	I0913 18:20:49.057786   11676 main.go:141] libmachine: (addons-084503)   <os>
	I0913 18:20:49.057799   11676 main.go:141] libmachine: (addons-084503)     <type>hvm</type>
	I0913 18:20:49.057807   11676 main.go:141] libmachine: (addons-084503)     <boot dev='cdrom'/>
	I0913 18:20:49.057813   11676 main.go:141] libmachine: (addons-084503)     <boot dev='hd'/>
	I0913 18:20:49.057821   11676 main.go:141] libmachine: (addons-084503)     <bootmenu enable='no'/>
	I0913 18:20:49.057827   11676 main.go:141] libmachine: (addons-084503)   </os>
	I0913 18:20:49.057834   11676 main.go:141] libmachine: (addons-084503)   <devices>
	I0913 18:20:49.057850   11676 main.go:141] libmachine: (addons-084503)     <disk type='file' device='cdrom'>
	I0913 18:20:49.057867   11676 main.go:141] libmachine: (addons-084503)       <source file='/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/boot2docker.iso'/>
	I0913 18:20:49.057877   11676 main.go:141] libmachine: (addons-084503)       <target dev='hdc' bus='scsi'/>
	I0913 18:20:49.057883   11676 main.go:141] libmachine: (addons-084503)       <readonly/>
	I0913 18:20:49.057897   11676 main.go:141] libmachine: (addons-084503)     </disk>
	I0913 18:20:49.057906   11676 main.go:141] libmachine: (addons-084503)     <disk type='file' device='disk'>
	I0913 18:20:49.057911   11676 main.go:141] libmachine: (addons-084503)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0913 18:20:49.057921   11676 main.go:141] libmachine: (addons-084503)       <source file='/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/addons-084503.rawdisk'/>
	I0913 18:20:49.057925   11676 main.go:141] libmachine: (addons-084503)       <target dev='hda' bus='virtio'/>
	I0913 18:20:49.057930   11676 main.go:141] libmachine: (addons-084503)     </disk>
	I0913 18:20:49.057935   11676 main.go:141] libmachine: (addons-084503)     <interface type='network'>
	I0913 18:20:49.057941   11676 main.go:141] libmachine: (addons-084503)       <source network='mk-addons-084503'/>
	I0913 18:20:49.057947   11676 main.go:141] libmachine: (addons-084503)       <model type='virtio'/>
	I0913 18:20:49.057952   11676 main.go:141] libmachine: (addons-084503)     </interface>
	I0913 18:20:49.057958   11676 main.go:141] libmachine: (addons-084503)     <interface type='network'>
	I0913 18:20:49.057963   11676 main.go:141] libmachine: (addons-084503)       <source network='default'/>
	I0913 18:20:49.057967   11676 main.go:141] libmachine: (addons-084503)       <model type='virtio'/>
	I0913 18:20:49.057993   11676 main.go:141] libmachine: (addons-084503)     </interface>
	I0913 18:20:49.058016   11676 main.go:141] libmachine: (addons-084503)     <serial type='pty'>
	I0913 18:20:49.058028   11676 main.go:141] libmachine: (addons-084503)       <target port='0'/>
	I0913 18:20:49.058039   11676 main.go:141] libmachine: (addons-084503)     </serial>
	I0913 18:20:49.058060   11676 main.go:141] libmachine: (addons-084503)     <console type='pty'>
	I0913 18:20:49.058074   11676 main.go:141] libmachine: (addons-084503)       <target type='serial' port='0'/>
	I0913 18:20:49.058093   11676 main.go:141] libmachine: (addons-084503)     </console>
	I0913 18:20:49.058110   11676 main.go:141] libmachine: (addons-084503)     <rng model='virtio'>
	I0913 18:20:49.058122   11676 main.go:141] libmachine: (addons-084503)       <backend model='random'>/dev/random</backend>
	I0913 18:20:49.058133   11676 main.go:141] libmachine: (addons-084503)     </rng>
	I0913 18:20:49.058138   11676 main.go:141] libmachine: (addons-084503)     
	I0913 18:20:49.058144   11676 main.go:141] libmachine: (addons-084503)     
	I0913 18:20:49.058160   11676 main.go:141] libmachine: (addons-084503)   </devices>
	I0913 18:20:49.058175   11676 main.go:141] libmachine: (addons-084503) </domain>
	I0913 18:20:49.058188   11676 main.go:141] libmachine: (addons-084503) 
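The domain XML logged above boots the minikube ISO first (`cdrom`), falls back to the raw disk (`hd`), and attaches two virtio NICs: one on the private `mk-addons-084503` network, one on libvirt's `default` network. As a sketch, those properties can be read back out of an abridged transcription of that XML with stdlib parsing (illustration only, not minikube code):

```python
# Re-check key properties of the domain definition from the log:
# boot order and attached networks. XML abridged from the log above.
import xml.etree.ElementTree as ET

domain_xml = """
<domain type='kvm'>
  <name>addons-084503</name>
  <memory unit='MiB'>4000</memory>
  <vcpu>2</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'><target dev='hdc' bus='scsi'/></disk>
    <disk type='file' device='disk'><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='mk-addons-084503'/></interface>
    <interface type='network'><source network='default'/></interface>
  </devices>
</domain>
"""

root = ET.fromstring(domain_xml)
boot_order = [b.get("dev") for b in root.findall("./os/boot")]
networks = [i.get("network") for i in root.findall("./devices/interface/source")]
print(boot_order, networks)
```

The dual-NIC layout explains the two MAC addresses reported in the following lines (one per network).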
	I0913 18:20:49.063776   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:d4:92:34 in network default
	I0913 18:20:49.064235   11676 main.go:141] libmachine: (addons-084503) Ensuring networks are active...
	I0913 18:20:49.064259   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:20:49.064944   11676 main.go:141] libmachine: (addons-084503) Ensuring network default is active
	I0913 18:20:49.065196   11676 main.go:141] libmachine: (addons-084503) Ensuring network mk-addons-084503 is active
	I0913 18:20:49.065617   11676 main.go:141] libmachine: (addons-084503) Getting domain xml...
	I0913 18:20:49.066220   11676 main.go:141] libmachine: (addons-084503) Creating domain...
	I0913 18:20:50.483487   11676 main.go:141] libmachine: (addons-084503) Waiting to get IP...
	I0913 18:20:50.484205   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:20:50.484687   11676 main.go:141] libmachine: (addons-084503) DBG | unable to find current IP address of domain addons-084503 in network mk-addons-084503
	I0913 18:20:50.484711   11676 main.go:141] libmachine: (addons-084503) DBG | I0913 18:20:50.484659   11698 retry.go:31] will retry after 208.385015ms: waiting for machine to come up
	I0913 18:20:50.695114   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:20:50.695461   11676 main.go:141] libmachine: (addons-084503) DBG | unable to find current IP address of domain addons-084503 in network mk-addons-084503
	I0913 18:20:50.695487   11676 main.go:141] libmachine: (addons-084503) DBG | I0913 18:20:50.695416   11698 retry.go:31] will retry after 378.06511ms: waiting for machine to come up
	I0913 18:20:51.074918   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:20:51.075266   11676 main.go:141] libmachine: (addons-084503) DBG | unable to find current IP address of domain addons-084503 in network mk-addons-084503
	I0913 18:20:51.075290   11676 main.go:141] libmachine: (addons-084503) DBG | I0913 18:20:51.075235   11698 retry.go:31] will retry after 352.278336ms: waiting for machine to come up
	I0913 18:20:51.428615   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:20:51.429009   11676 main.go:141] libmachine: (addons-084503) DBG | unable to find current IP address of domain addons-084503 in network mk-addons-084503
	I0913 18:20:51.429033   11676 main.go:141] libmachine: (addons-084503) DBG | I0913 18:20:51.428967   11698 retry.go:31] will retry after 389.319728ms: waiting for machine to come up
	I0913 18:20:51.819449   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:20:51.819799   11676 main.go:141] libmachine: (addons-084503) DBG | unable to find current IP address of domain addons-084503 in network mk-addons-084503
	I0913 18:20:51.819833   11676 main.go:141] libmachine: (addons-084503) DBG | I0913 18:20:51.819768   11698 retry.go:31] will retry after 685.789794ms: waiting for machine to come up
	I0913 18:20:52.507645   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:20:52.507988   11676 main.go:141] libmachine: (addons-084503) DBG | unable to find current IP address of domain addons-084503 in network mk-addons-084503
	I0913 18:20:52.508014   11676 main.go:141] libmachine: (addons-084503) DBG | I0913 18:20:52.507940   11698 retry.go:31] will retry after 805.166915ms: waiting for machine to come up
	I0913 18:20:53.314915   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:20:53.315306   11676 main.go:141] libmachine: (addons-084503) DBG | unable to find current IP address of domain addons-084503 in network mk-addons-084503
	I0913 18:20:53.315333   11676 main.go:141] libmachine: (addons-084503) DBG | I0913 18:20:53.315267   11698 retry.go:31] will retry after 933.070675ms: waiting for machine to come up
	I0913 18:20:54.249773   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:20:54.250177   11676 main.go:141] libmachine: (addons-084503) DBG | unable to find current IP address of domain addons-084503 in network mk-addons-084503
	I0913 18:20:54.250220   11676 main.go:141] libmachine: (addons-084503) DBG | I0913 18:20:54.250145   11698 retry.go:31] will retry after 1.126819281s: waiting for machine to come up
	I0913 18:20:55.378702   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:20:55.379019   11676 main.go:141] libmachine: (addons-084503) DBG | unable to find current IP address of domain addons-084503 in network mk-addons-084503
	I0913 18:20:55.379092   11676 main.go:141] libmachine: (addons-084503) DBG | I0913 18:20:55.378860   11698 retry.go:31] will retry after 1.804119966s: waiting for machine to come up
	I0913 18:20:57.184285   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:20:57.184602   11676 main.go:141] libmachine: (addons-084503) DBG | unable to find current IP address of domain addons-084503 in network mk-addons-084503
	I0913 18:20:57.184626   11676 main.go:141] libmachine: (addons-084503) DBG | I0913 18:20:57.184552   11698 retry.go:31] will retry after 2.192136077s: waiting for machine to come up
	I0913 18:20:59.378815   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:20:59.379280   11676 main.go:141] libmachine: (addons-084503) DBG | unable to find current IP address of domain addons-084503 in network mk-addons-084503
	I0913 18:20:59.379298   11676 main.go:141] libmachine: (addons-084503) DBG | I0913 18:20:59.379238   11698 retry.go:31] will retry after 2.487622413s: waiting for machine to come up
	I0913 18:21:01.869877   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:01.870303   11676 main.go:141] libmachine: (addons-084503) DBG | unable to find current IP address of domain addons-084503 in network mk-addons-084503
	I0913 18:21:01.870324   11676 main.go:141] libmachine: (addons-084503) DBG | I0913 18:21:01.870269   11698 retry.go:31] will retry after 2.305376387s: waiting for machine to come up
	I0913 18:21:04.176809   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:04.177243   11676 main.go:141] libmachine: (addons-084503) DBG | unable to find current IP address of domain addons-084503 in network mk-addons-084503
	I0913 18:21:04.177318   11676 main.go:141] libmachine: (addons-084503) DBG | I0913 18:21:04.177206   11698 retry.go:31] will retry after 3.889226319s: waiting for machine to come up
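The `retry.go:31` lines above show a jittered, growing backoff while polling for the DHCP lease: delays climb from ~200ms toward a few seconds rather than repeating a fixed interval. A minimal sketch of that pattern follows; the constants and jitter formula here are illustrative assumptions, not lifted from minikube's retry package:

```python
# Grow-with-jitter retry backoff, as suggested by the delay sequence in
# the log above. Constants are illustrative, not minikube's.
import random

def backoff(attempt, base=0.2, factor=1.5, cap=5.0, rng=random.random):
    """Delay in seconds for a 1-based attempt, with +/-25% jitter, capped."""
    raw = min(base * factor ** (attempt - 1), cap)
    jitter = 1.0 + (rng() - 0.5) * 0.5   # scale by a factor in 0.75..1.25
    return min(raw * jitter, cap)

# Fix the RNG at 0.5 so the jitter factor is exactly 1.0 (deterministic demo).
delays = [backoff(n, rng=lambda: 0.5) for n in range(1, 11)]
print([round(d, 3) for d in delays])
```

Jitter keeps many concurrent pollers from hammering the hypervisor in lockstep; the cap bounds the worst-case wait between checks.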
	I0913 18:21:08.071114   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:08.071555   11676 main.go:141] libmachine: (addons-084503) Found IP for machine: 192.168.39.228
	I0913 18:21:08.071621   11676 main.go:141] libmachine: (addons-084503) Reserving static IP address...
	I0913 18:21:08.071643   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has current primary IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:08.071948   11676 main.go:141] libmachine: (addons-084503) DBG | unable to find host DHCP lease matching {name: "addons-084503", mac: "52:54:00:ca:28:52", ip: "192.168.39.228"} in network mk-addons-084503
	I0913 18:21:08.144859   11676 main.go:141] libmachine: (addons-084503) DBG | Getting to WaitForSSH function...
	I0913 18:21:08.144896   11676 main.go:141] libmachine: (addons-084503) Reserved static IP address: 192.168.39.228
	I0913 18:21:08.144909   11676 main.go:141] libmachine: (addons-084503) Waiting for SSH to be available...
	I0913 18:21:08.147227   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:08.147692   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:08.147726   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:08.147927   11676 main.go:141] libmachine: (addons-084503) DBG | Using SSH client type: external
	I0913 18:21:08.147947   11676 main.go:141] libmachine: (addons-084503) DBG | Using SSH private key: /home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa (-rw-------)
	I0913 18:21:08.147976   11676 main.go:141] libmachine: (addons-084503) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0913 18:21:08.147985   11676 main.go:141] libmachine: (addons-084503) DBG | About to run SSH command:
	I0913 18:21:08.147993   11676 main.go:141] libmachine: (addons-084503) DBG | exit 0
	I0913 18:21:08.282185   11676 main.go:141] libmachine: (addons-084503) DBG | SSH cmd err, output: <nil>: 
	I0913 18:21:08.282490   11676 main.go:141] libmachine: (addons-084503) KVM machine creation complete!
	I0913 18:21:08.282792   11676 main.go:141] libmachine: (addons-084503) Calling .GetConfigRaw
	I0913 18:21:08.283340   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:08.283536   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:08.283661   11676 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0913 18:21:08.283674   11676 main.go:141] libmachine: (addons-084503) Calling .GetState
	I0913 18:21:08.284949   11676 main.go:141] libmachine: Detecting operating system of created instance...
	I0913 18:21:08.284962   11676 main.go:141] libmachine: Waiting for SSH to be available...
	I0913 18:21:08.284967   11676 main.go:141] libmachine: Getting to WaitForSSH function...
	I0913 18:21:08.284973   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:08.287324   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:08.287683   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:08.287716   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:08.287856   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:08.288061   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:08.288203   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:08.288319   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:08.288426   11676 main.go:141] libmachine: Using SSH client type: native
	I0913 18:21:08.288663   11676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0913 18:21:08.288677   11676 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0913 18:21:08.397584   11676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:21:08.397608   11676 main.go:141] libmachine: Detecting the provisioner...
	I0913 18:21:08.397618   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:08.400085   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:08.400430   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:08.400453   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:08.400610   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:08.400806   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:08.400960   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:08.401090   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:08.401212   11676 main.go:141] libmachine: Using SSH client type: native
	I0913 18:21:08.401381   11676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0913 18:21:08.401392   11676 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0913 18:21:08.514699   11676 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0913 18:21:08.514780   11676 main.go:141] libmachine: found compatible host: buildroot
	I0913 18:21:08.514791   11676 main.go:141] libmachine: Provisioning with buildroot...
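The provisioner is picked by reading `/etc/os-release` over SSH (the `cat /etc/os-release` output above) and matching the fields against known guest OS flavors, here Buildroot. A sketch of parsing that key=value format with quote stripping; `parse_os_release` is a hypothetical helper, not minikube's detector:

```python
# Parse /etc/os-release-style key=value text, as the provisioner
# detection above does. parse_os_release is a hypothetical helper.
def parse_os_release(text):
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        info[key] = value.strip('"')          # values may be double-quoted
    return info

# Exact output captured in the log above.
os_release = '''NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
'''
info = parse_os_release(os_release)
print(info["ID"], info["PRETTY_NAME"])
```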
	I0913 18:21:08.514801   11676 main.go:141] libmachine: (addons-084503) Calling .GetMachineName
	I0913 18:21:08.515015   11676 buildroot.go:166] provisioning hostname "addons-084503"
	I0913 18:21:08.515037   11676 main.go:141] libmachine: (addons-084503) Calling .GetMachineName
	I0913 18:21:08.515190   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:08.517900   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:08.518270   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:08.518311   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:08.518516   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:08.518729   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:08.518877   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:08.519026   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:08.519158   11676 main.go:141] libmachine: Using SSH client type: native
	I0913 18:21:08.519363   11676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0913 18:21:08.519378   11676 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-084503 && echo "addons-084503" | sudo tee /etc/hostname
	I0913 18:21:08.644648   11676 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-084503
	
	I0913 18:21:08.644680   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:08.647228   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:08.647633   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:08.647664   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:08.647812   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:08.647987   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:08.648108   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:08.648231   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:08.648359   11676 main.go:141] libmachine: Using SSH client type: native
	I0913 18:21:08.648526   11676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0913 18:21:08.648542   11676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-084503' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-084503/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-084503' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 18:21:08.766519   11676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
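The shell snippet executed above makes the new hostname resolve locally: if no `/etc/hosts` line already ends with `addons-084503`, it rewrites an existing `127.0.1.1` entry in place, else appends one. The same grep/sed/tee logic, sketched in Python purely for illustration (`set_local_hostname` is a hypothetical helper):

```python
# Mirror the /etc/hosts rewrite from the shell snippet above:
# ensure `127.0.1.1 <hostname>` exists, rewriting or appending as needed.
import re

def set_local_hostname(hosts_text, hostname):
    lines = hosts_text.splitlines()
    if any(re.fullmatch(r".*\s" + re.escape(hostname), ln) for ln in lines):
        return hosts_text                       # already present: no change
    for i, ln in enumerate(lines):
        if re.fullmatch(r"127\.0\.1\.1\s.*", ln):
            lines[i] = f"127.0.1.1 {hostname}"  # rewrite existing entry
            break
    else:
        lines.append(f"127.0.1.1 {hostname}")   # or append a fresh one
    return "\n".join(lines)

out = set_local_hostname("127.0.0.1 localhost\n127.0.1.1 minikube",
                         "addons-084503")
print(out)
```

Like the shell version, this is idempotent: a second run finds the hostname already present and leaves the file untouched.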
	I0913 18:21:08.766544   11676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19636-3886/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-3886/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-3886/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-3886/.minikube}
	I0913 18:21:08.766583   11676 buildroot.go:174] setting up certificates
	I0913 18:21:08.766595   11676 provision.go:84] configureAuth start
	I0913 18:21:08.766613   11676 main.go:141] libmachine: (addons-084503) Calling .GetMachineName
	I0913 18:21:08.766883   11676 main.go:141] libmachine: (addons-084503) Calling .GetIP
	I0913 18:21:08.769253   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:08.769501   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:08.769521   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:08.769616   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:08.771856   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:08.772206   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:08.772236   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:08.772394   11676 provision.go:143] copyHostCerts
	I0913 18:21:08.772471   11676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3886/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-3886/.minikube/ca.pem (1082 bytes)
	I0913 18:21:08.772598   11676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3886/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-3886/.minikube/cert.pem (1123 bytes)
	I0913 18:21:08.772683   11676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3886/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-3886/.minikube/key.pem (1675 bytes)
	I0913 18:21:08.772751   11676 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-3886/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-3886/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-3886/.minikube/certs/ca-key.pem org=jenkins.addons-084503 san=[127.0.0.1 192.168.39.228 addons-084503 localhost minikube]
	I0913 18:21:08.967687   11676 provision.go:177] copyRemoteCerts
	I0913 18:21:08.967746   11676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 18:21:08.967772   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:08.970388   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:08.970744   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:08.970773   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:08.970917   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:08.971107   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:08.971229   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:08.971355   11676 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa Username:docker}
	I0913 18:21:09.056513   11676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3886/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 18:21:09.079774   11676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3886/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 18:21:09.102969   11676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3886/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 18:21:09.125451   11676 provision.go:87] duration metric: took 358.842982ms to configureAuth
	I0913 18:21:09.125477   11676 buildroot.go:189] setting minikube options for container-runtime
	I0913 18:21:09.125627   11676 config.go:182] Loaded profile config "addons-084503": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 18:21:09.125651   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:09.125932   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:09.128278   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:09.128626   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:09.128660   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:09.128786   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:09.128967   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:09.129107   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:09.129238   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:09.129388   11676 main.go:141] libmachine: Using SSH client type: native
	I0913 18:21:09.129573   11676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0913 18:21:09.129587   11676 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0913 18:21:09.243494   11676 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0913 18:21:09.243527   11676 buildroot.go:70] root file system type: tmpfs
	I0913 18:21:09.243672   11676 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0913 18:21:09.243699   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:09.246179   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:09.246546   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:09.246574   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:09.246722   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:09.246889   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:09.247071   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:09.247209   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:09.247378   11676 main.go:141] libmachine: Using SSH client type: native
	I0913 18:21:09.247545   11676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0913 18:21:09.247606   11676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0913 18:21:09.373944   11676 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0913 18:21:09.373982   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:09.376743   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:09.377050   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:09.377077   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:09.377276   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:09.377446   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:09.377597   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:09.377706   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:09.377867   11676 main.go:141] libmachine: Using SSH client type: native
	I0913 18:21:09.378041   11676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0913 18:21:09.378057   11676 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0913 18:21:11.488193   11676 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0913 18:21:11.488231   11676 main.go:141] libmachine: Checking connection to Docker...
	I0913 18:21:11.488242   11676 main.go:141] libmachine: (addons-084503) Calling .GetURL
	I0913 18:21:11.489574   11676 main.go:141] libmachine: (addons-084503) DBG | Using libvirt version 6000000
	I0913 18:21:11.491865   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:11.492293   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:11.492320   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:11.492660   11676 main.go:141] libmachine: Docker is up and running!
	I0913 18:21:11.492675   11676 main.go:141] libmachine: Reticulating splines...
	I0913 18:21:11.492681   11676 client.go:171] duration metric: took 23.148077219s to LocalClient.Create
	I0913 18:21:11.492705   11676 start.go:167] duration metric: took 23.148135722s to libmachine.API.Create "addons-084503"
	I0913 18:21:11.492717   11676 start.go:293] postStartSetup for "addons-084503" (driver="kvm2")
	I0913 18:21:11.492729   11676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 18:21:11.492745   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:11.493029   11676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 18:21:11.493053   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:11.495445   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:11.495768   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:11.495812   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:11.495946   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:11.496132   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:11.496281   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:11.496420   11676 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa Username:docker}
	I0913 18:21:11.584345   11676 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 18:21:11.588393   11676 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 18:21:11.588419   11676 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3886/.minikube/addons for local assets ...
	I0913 18:21:11.588491   11676 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3886/.minikube/files for local assets ...
	I0913 18:21:11.588522   11676 start.go:296] duration metric: took 95.799581ms for postStartSetup
	I0913 18:21:11.588551   11676 main.go:141] libmachine: (addons-084503) Calling .GetConfigRaw
	I0913 18:21:11.589125   11676 main.go:141] libmachine: (addons-084503) Calling .GetIP
	I0913 18:21:11.591388   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:11.591678   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:11.591712   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:11.592009   11676 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/config.json ...
	I0913 18:21:11.592182   11676 start.go:128] duration metric: took 23.265600516s to createHost
	I0913 18:21:11.592203   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:11.594340   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:11.594683   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:11.594707   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:11.594887   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:11.595146   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:11.595298   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:11.595400   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:11.595544   11676 main.go:141] libmachine: Using SSH client type: native
	I0913 18:21:11.595745   11676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0913 18:21:11.595757   11676 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 18:21:11.706667   11676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726251671.682036282
	
	I0913 18:21:11.706692   11676 fix.go:216] guest clock: 1726251671.682036282
	I0913 18:21:11.706703   11676 fix.go:229] Guest: 2024-09-13 18:21:11.682036282 +0000 UTC Remote: 2024-09-13 18:21:11.592192509 +0000 UTC m=+23.367238734 (delta=89.843773ms)
	I0913 18:21:11.706750   11676 fix.go:200] guest clock delta is within tolerance: 89.843773ms
	I0913 18:21:11.706758   11676 start.go:83] releasing machines lock for "addons-084503", held for 23.380245819s
	I0913 18:21:11.706780   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:11.707020   11676 main.go:141] libmachine: (addons-084503) Calling .GetIP
	I0913 18:21:11.709252   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:11.709578   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:11.709602   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:11.709756   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:11.710204   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:11.710368   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:11.710487   11676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 18:21:11.710541   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:11.710552   11676 ssh_runner.go:195] Run: cat /version.json
	I0913 18:21:11.710580   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:11.713152   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:11.713177   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:11.713575   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:11.713610   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:11.713633   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:11.713686   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:11.713778   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:11.713958   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:11.713991   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:11.714082   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:11.714151   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:11.714214   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:11.714266   11676 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa Username:docker}
	I0913 18:21:11.714304   11676 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa Username:docker}
	I0913 18:21:11.817604   11676 ssh_runner.go:195] Run: systemctl --version
	I0913 18:21:11.823218   11676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 18:21:11.828449   11676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 18:21:11.828516   11676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 18:21:11.844620   11676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 18:21:11.844653   11676 start.go:495] detecting cgroup driver to use...
	I0913 18:21:11.844803   11676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 18:21:11.862644   11676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0913 18:21:11.873558   11676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0913 18:21:11.883881   11676 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0913 18:21:11.883942   11676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0913 18:21:11.894379   11676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 18:21:11.905067   11676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0913 18:21:11.915634   11676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 18:21:11.926156   11676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 18:21:11.936659   11676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0913 18:21:11.946986   11676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0913 18:21:11.957401   11676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0913 18:21:11.967932   11676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 18:21:11.977336   11676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 18:21:11.986946   11676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:21:12.095232   11676 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0913 18:21:12.113617   11676 start.go:495] detecting cgroup driver to use...
	I0913 18:21:12.113692   11676 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0913 18:21:12.128963   11676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 18:21:12.142077   11676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 18:21:12.160965   11676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 18:21:12.174958   11676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0913 18:21:12.188443   11676 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0913 18:21:12.220010   11676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0913 18:21:12.233650   11676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 18:21:12.252083   11676 ssh_runner.go:195] Run: which cri-dockerd
	I0913 18:21:12.255764   11676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0913 18:21:12.264951   11676 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0913 18:21:12.281208   11676 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0913 18:21:12.394486   11676 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0913 18:21:12.520036   11676 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0913 18:21:12.520176   11676 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0913 18:21:12.536824   11676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:21:12.647139   11676 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 18:21:14.981111   11676 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.333928549s)
	I0913 18:21:14.981188   11676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0913 18:21:14.995257   11676 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0913 18:21:15.011854   11676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 18:21:15.025745   11676 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0913 18:21:15.135715   11676 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0913 18:21:15.262170   11676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:21:15.373674   11676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0913 18:21:15.391029   11676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 18:21:15.404234   11676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:21:15.518770   11676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0913 18:21:15.592692   11676 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0913 18:21:15.592790   11676 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0913 18:21:15.598681   11676 start.go:563] Will wait 60s for crictl version
	I0913 18:21:15.598747   11676 ssh_runner.go:195] Run: which crictl
	I0913 18:21:15.602760   11676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 18:21:15.637876   11676 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0913 18:21:15.637953   11676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 18:21:15.664174   11676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 18:21:15.689370   11676 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0913 18:21:15.689411   11676 main.go:141] libmachine: (addons-084503) Calling .GetIP
	I0913 18:21:15.691947   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:15.692274   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:15.692304   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:15.692482   11676 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0913 18:21:15.696435   11676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 18:21:15.708704   11676 kubeadm.go:883] updating cluster {Name:addons-084503 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-084503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 18:21:15.708813   11676 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 18:21:15.708863   11676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 18:21:15.723813   11676 docker.go:685] Got preloaded images: 
	I0913 18:21:15.723834   11676 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0913 18:21:15.723879   11676 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0913 18:21:15.733446   11676 ssh_runner.go:195] Run: which lz4
	I0913 18:21:15.737191   11676 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 18:21:15.741026   11676 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 18:21:15.741059   11676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I0913 18:21:16.854823   11676 docker.go:649] duration metric: took 1.117671283s to copy over tarball
	I0913 18:21:16.854920   11676 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 18:21:18.733673   11676 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.878719741s)
	I0913 18:21:18.733715   11676 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 18:21:18.773423   11676 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0913 18:21:18.784163   11676 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0913 18:21:18.800511   11676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:21:18.910714   11676 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 18:21:22.136238   11676 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.225493547s)
	I0913 18:21:22.136315   11676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 18:21:22.171204   11676 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 18:21:22.171232   11676 cache_images.go:84] Images are preloaded, skipping loading
	I0913 18:21:22.171248   11676 kubeadm.go:934] updating node { 192.168.39.228 8443 v1.31.1 docker true true} ...
	I0913 18:21:22.171366   11676 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-084503 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-084503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 18:21:22.171433   11676 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0913 18:21:22.226725   11676 cni.go:84] Creating CNI manager for ""
	I0913 18:21:22.226760   11676 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 18:21:22.226774   11676 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 18:21:22.226799   11676 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.228 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-084503 NodeName:addons-084503 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 18:21:22.226981   11676 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-084503"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 18:21:22.227048   11676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 18:21:22.236299   11676 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 18:21:22.236362   11676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 18:21:22.245996   11676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0913 18:21:22.261996   11676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 18:21:22.277977   11676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0913 18:21:22.294247   11676 ssh_runner.go:195] Run: grep 192.168.39.228	control-plane.minikube.internal$ /etc/hosts
	I0913 18:21:22.297892   11676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 18:21:22.309425   11676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:21:22.417380   11676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:21:22.438260   11676 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503 for IP: 192.168.39.228
	I0913 18:21:22.438285   11676 certs.go:194] generating shared ca certs ...
	I0913 18:21:22.438305   11676 certs.go:226] acquiring lock for ca certs: {Name:mkf1c42dc750889d63f6afb243288428d7508077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:22.438485   11676 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3886/.minikube/ca.key
	I0913 18:21:22.721483   11676 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3886/.minikube/ca.crt ...
	I0913 18:21:22.721513   11676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3886/.minikube/ca.crt: {Name:mkcb4eae2900a16a07d8872a7d8e08f43cb20216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:22.721687   11676 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3886/.minikube/ca.key ...
	I0913 18:21:22.721699   11676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3886/.minikube/ca.key: {Name:mk956e437eba8cc42cad865328c8bcf13bb5c902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:22.721773   11676 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3886/.minikube/proxy-client-ca.key
	I0913 18:21:22.897308   11676 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3886/.minikube/proxy-client-ca.crt ...
	I0913 18:21:22.897342   11676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3886/.minikube/proxy-client-ca.crt: {Name:mk0f386cc20747b8c1bd0bd5dc090f0803670dee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:22.897538   11676 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3886/.minikube/proxy-client-ca.key ...
	I0913 18:21:22.897553   11676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3886/.minikube/proxy-client-ca.key: {Name:mk3990902a897e0e55cec983bc498b852c35bb0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:22.897651   11676 certs.go:256] generating profile certs ...
	I0913 18:21:22.897707   11676 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.key
	I0913 18:21:22.897730   11676 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt with IP's: []
	I0913 18:21:22.974402   11676 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt ...
	I0913 18:21:22.974430   11676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: {Name:mkc359f038b822ebe43ca5066691c3e687b14afc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:22.974612   11676 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.key ...
	I0913 18:21:22.974626   11676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.key: {Name:mk840b20c569fce4bcaf026e22a011b3fb37000e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:22.974715   11676 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/apiserver.key.2e93452d
	I0913 18:21:22.974736   11676 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/apiserver.crt.2e93452d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.228]
	I0913 18:21:23.056322   11676 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/apiserver.crt.2e93452d ...
	I0913 18:21:23.056353   11676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/apiserver.crt.2e93452d: {Name:mkb693b11913fe2582d312ea8e08402f23d5cddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:23.056530   11676 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/apiserver.key.2e93452d ...
	I0913 18:21:23.056547   11676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/apiserver.key.2e93452d: {Name:mk2e4c43040f52a40c06dd209317f11e5e916119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:23.056653   11676 certs.go:381] copying /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/apiserver.crt.2e93452d -> /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/apiserver.crt
	I0913 18:21:23.056736   11676 certs.go:385] copying /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/apiserver.key.2e93452d -> /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/apiserver.key
	I0913 18:21:23.056783   11676 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/proxy-client.key
	I0913 18:21:23.056800   11676 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/proxy-client.crt with IP's: []
	I0913 18:21:23.325741   11676 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/proxy-client.crt ...
	I0913 18:21:23.325773   11676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/proxy-client.crt: {Name:mk7d3af490eb16e9bd019206eb519e856b3e6a22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:23.364162   11676 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/proxy-client.key ...
	I0913 18:21:23.364200   11676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/proxy-client.key: {Name:mk5e49954797489c3e31fa2c576746b2c91ff4d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:23.364444   11676 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3886/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 18:21:23.364490   11676 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3886/.minikube/certs/ca.pem (1082 bytes)
	I0913 18:21:23.364527   11676 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3886/.minikube/certs/cert.pem (1123 bytes)
	I0913 18:21:23.364552   11676 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3886/.minikube/certs/key.pem (1675 bytes)
	I0913 18:21:23.365192   11676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3886/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 18:21:23.389102   11676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3886/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0913 18:21:23.412603   11676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3886/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 18:21:23.435816   11676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3886/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0913 18:21:23.459261   11676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0913 18:21:23.482996   11676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 18:21:23.506104   11676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 18:21:23.529497   11676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 18:21:23.552576   11676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-3886/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 18:21:23.575522   11676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 18:21:23.591849   11676 ssh_runner.go:195] Run: openssl version
	I0913 18:21:23.597469   11676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 18:21:23.608743   11676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:21:23.613139   11676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:21 /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:21:23.613193   11676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:21:23.618983   11676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 18:21:23.629997   11676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 18:21:23.633979   11676 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 18:21:23.634028   11676 kubeadm.go:392] StartCluster: {Name:addons-084503 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-084503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:21:23.634150   11676 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0913 18:21:23.650642   11676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 18:21:23.659902   11676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 18:21:23.669028   11676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 18:21:23.678280   11676 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 18:21:23.678301   11676 kubeadm.go:157] found existing configuration files:
	
	I0913 18:21:23.678377   11676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 18:21:23.687043   11676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 18:21:23.687103   11676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 18:21:23.696171   11676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 18:21:23.704956   11676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 18:21:23.705014   11676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 18:21:23.714102   11676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 18:21:23.722764   11676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 18:21:23.722824   11676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 18:21:23.731925   11676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 18:21:23.740508   11676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 18:21:23.740576   11676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 18:21:23.749620   11676 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 18:21:23.801024   11676 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 18:21:23.801148   11676 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 18:21:23.902246   11676 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 18:21:23.902361   11676 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 18:21:23.902487   11676 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 18:21:23.912893   11676 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 18:21:23.915227   11676 out.go:235]   - Generating certificates and keys ...
	I0913 18:21:23.915323   11676 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 18:21:23.915400   11676 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 18:21:24.232890   11676 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 18:21:24.363493   11676 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 18:21:24.486333   11676 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 18:21:24.629968   11676 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 18:21:24.731746   11676 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 18:21:24.731950   11676 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-084503 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	I0913 18:21:24.784854   11676 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 18:21:24.785059   11676 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-084503 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	I0913 18:21:24.930955   11676 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 18:21:25.056807   11676 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 18:21:25.158702   11676 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 18:21:25.158780   11676 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 18:21:25.356730   11676 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 18:21:25.431433   11676 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 18:21:25.507070   11676 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 18:21:25.586777   11676 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 18:21:25.682802   11676 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 18:21:25.683736   11676 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 18:21:25.687621   11676 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 18:21:25.689677   11676 out.go:235]   - Booting up control plane ...
	I0913 18:21:25.689785   11676 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 18:21:25.689878   11676 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 18:21:25.689967   11676 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 18:21:25.705681   11676 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 18:21:25.711700   11676 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 18:21:25.711755   11676 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 18:21:25.841225   11676 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 18:21:25.841361   11676 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 18:21:26.843024   11676 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002126029s
	I0913 18:21:26.843131   11676 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 18:21:31.844999   11676 kubeadm.go:310] [api-check] The API server is healthy after 5.001714755s
	I0913 18:21:31.856194   11676 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 18:21:31.878097   11676 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 18:21:31.913182   11676 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 18:21:31.913393   11676 kubeadm.go:310] [mark-control-plane] Marking the node addons-084503 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 18:21:31.927507   11676 kubeadm.go:310] [bootstrap-token] Using token: rusqsx.1jh04xftlo282v7w
	I0913 18:21:31.928844   11676 out.go:235]   - Configuring RBAC rules ...
	I0913 18:21:31.929012   11676 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 18:21:31.938425   11676 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 18:21:31.947454   11676 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 18:21:31.956283   11676 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 18:21:31.961646   11676 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 18:21:31.967305   11676 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 18:21:32.249879   11676 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 18:21:32.700684   11676 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 18:21:33.250555   11676 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 18:21:33.251459   11676 kubeadm.go:310] 
	I0913 18:21:33.251570   11676 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 18:21:33.251591   11676 kubeadm.go:310] 
	I0913 18:21:33.251707   11676 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 18:21:33.251718   11676 kubeadm.go:310] 
	I0913 18:21:33.251753   11676 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 18:21:33.251823   11676 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 18:21:33.251892   11676 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 18:21:33.251906   11676 kubeadm.go:310] 
	I0913 18:21:33.251959   11676 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 18:21:33.251968   11676 kubeadm.go:310] 
	I0913 18:21:33.252007   11676 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 18:21:33.252013   11676 kubeadm.go:310] 
	I0913 18:21:33.252056   11676 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 18:21:33.252142   11676 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 18:21:33.252222   11676 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 18:21:33.252229   11676 kubeadm.go:310] 
	I0913 18:21:33.252299   11676 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 18:21:33.252372   11676 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 18:21:33.252378   11676 kubeadm.go:310] 
	I0913 18:21:33.252460   11676 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rusqsx.1jh04xftlo282v7w \
	I0913 18:21:33.252548   11676 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:78fb95de514fee3376583abaea329ecde0b7e894f8c56f1343f108c89d42a6c5 \
	I0913 18:21:33.252577   11676 kubeadm.go:310] 	--control-plane 
	I0913 18:21:33.252587   11676 kubeadm.go:310] 
	I0913 18:21:33.252674   11676 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 18:21:33.252692   11676 kubeadm.go:310] 
	I0913 18:21:33.252811   11676 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rusqsx.1jh04xftlo282v7w \
	I0913 18:21:33.252904   11676 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:78fb95de514fee3376583abaea329ecde0b7e894f8c56f1343f108c89d42a6c5 
	I0913 18:21:33.253861   11676 kubeadm.go:310] W0913 18:21:23.777650    1592 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 18:21:33.254145   11676 kubeadm.go:310] W0913 18:21:23.778614    1592 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 18:21:33.254256   11676 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 18:21:33.254283   11676 cni.go:84] Creating CNI manager for ""
	I0913 18:21:33.254301   11676 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 18:21:33.255946   11676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 18:21:33.257272   11676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 18:21:33.267760   11676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 18:21:33.288499   11676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 18:21:33.288585   11676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:21:33.288612   11676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-084503 minikube.k8s.io/updated_at=2024_09_13T18_21_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=addons-084503 minikube.k8s.io/primary=true
	I0913 18:21:33.299172   11676 ops.go:34] apiserver oom_adj: -16
	I0913 18:21:33.426805   11676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:21:33.927073   11676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:21:34.426941   11676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:21:34.926986   11676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:21:35.427178   11676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:21:35.927933   11676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:21:36.427872   11676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:21:36.927814   11676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:21:37.427488   11676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:21:37.532704   11676 kubeadm.go:1113] duration metric: took 4.24418132s to wait for elevateKubeSystemPrivileges
	I0913 18:21:37.532742   11676 kubeadm.go:394] duration metric: took 13.898719057s to StartCluster
	I0913 18:21:37.532765   11676 settings.go:142] acquiring lock: {Name:mk626cf63256ee26a00a2b8dcc8927558027a069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:37.532916   11676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3886/kubeconfig
	I0913 18:21:37.533404   11676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3886/kubeconfig: {Name:mk3f5963676b46c6419a79958f3628172b4cf5bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:37.533640   11676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 18:21:37.533668   11676 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 18:21:37.533739   11676 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0913 18:21:37.533857   11676 addons.go:69] Setting yakd=true in profile "addons-084503"
	I0913 18:21:37.533873   11676 config.go:182] Loaded profile config "addons-084503": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 18:21:37.533884   11676 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-084503"
	I0913 18:21:37.533873   11676 addons.go:69] Setting cloud-spanner=true in profile "addons-084503"
	I0913 18:21:37.533903   11676 addons.go:69] Setting volcano=true in profile "addons-084503"
	I0913 18:21:37.533897   11676 addons.go:69] Setting registry=true in profile "addons-084503"
	I0913 18:21:37.533915   11676 addons.go:234] Setting addon cloud-spanner=true in "addons-084503"
	I0913 18:21:37.533919   11676 addons.go:69] Setting metrics-server=true in profile "addons-084503"
	I0913 18:21:37.533921   11676 addons.go:234] Setting addon registry=true in "addons-084503"
	I0913 18:21:37.533925   11676 addons.go:69] Setting volumesnapshots=true in profile "addons-084503"
	I0913 18:21:37.533930   11676 addons.go:234] Setting addon metrics-server=true in "addons-084503"
	I0913 18:21:37.533933   11676 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-084503"
	I0913 18:21:37.533934   11676 addons.go:234] Setting addon volumesnapshots=true in "addons-084503"
	I0913 18:21:37.533956   11676 host.go:66] Checking if "addons-084503" exists ...
	I0913 18:21:37.533965   11676 addons.go:69] Setting inspektor-gadget=true in profile "addons-084503"
	I0913 18:21:37.533968   11676 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-084503"
	I0913 18:21:37.533970   11676 addons.go:69] Setting gcp-auth=true in profile "addons-084503"
	I0913 18:21:37.533972   11676 addons.go:69] Setting default-storageclass=true in profile "addons-084503"
	I0913 18:21:37.533976   11676 addons.go:234] Setting addon inspektor-gadget=true in "addons-084503"
	I0913 18:21:37.533978   11676 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-084503"
	I0913 18:21:37.533986   11676 mustload.go:65] Loading cluster: addons-084503
	I0913 18:21:37.533987   11676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-084503"
	I0913 18:21:37.533984   11676 addons.go:69] Setting ingress=true in profile "addons-084503"
	I0913 18:21:37.533996   11676 host.go:66] Checking if "addons-084503" exists ...
	I0913 18:21:37.533896   11676 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-084503"
	I0913 18:21:37.534008   11676 addons.go:234] Setting addon ingress=true in "addons-084503"
	I0913 18:21:37.534013   11676 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-084503"
	I0913 18:21:37.534044   11676 host.go:66] Checking if "addons-084503" exists ...
	I0913 18:21:37.534169   11676 config.go:182] Loaded profile config "addons-084503": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 18:21:37.534436   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.534489   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.533887   11676 addons.go:69] Setting storage-provisioner=true in profile "addons-084503"
	I0913 18:21:37.534527   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.534531   11676 addons.go:234] Setting addon storage-provisioner=true in "addons-084503"
	I0913 18:21:37.534553   11676 host.go:66] Checking if "addons-084503" exists ...
	I0913 18:21:37.534557   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.533993   11676 host.go:66] Checking if "addons-084503" exists ...
	I0913 18:21:37.534625   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.534658   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.534494   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.534746   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.534904   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.534453   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.534917   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.534944   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.534960   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.533917   11676 addons.go:234] Setting addon volcano=true in "addons-084503"
	I0913 18:21:37.535093   11676 host.go:66] Checking if "addons-084503" exists ...
	I0913 18:21:37.534910   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.534945   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.535217   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.535520   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.535559   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.533956   11676 host.go:66] Checking if "addons-084503" exists ...
	I0913 18:21:37.533876   11676 addons.go:234] Setting addon yakd=true in "addons-084503"
	I0913 18:21:37.535970   11676 host.go:66] Checking if "addons-084503" exists ...
	I0913 18:21:37.536175   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.536216   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.533959   11676 host.go:66] Checking if "addons-084503" exists ...
	I0913 18:21:37.536355   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.536389   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.533868   11676 addons.go:69] Setting ingress-dns=true in profile "addons-084503"
	I0913 18:21:37.536461   11676 addons.go:234] Setting addon ingress-dns=true in "addons-084503"
	I0913 18:21:37.536500   11676 host.go:66] Checking if "addons-084503" exists ...
	I0913 18:21:37.533959   11676 host.go:66] Checking if "addons-084503" exists ...
	I0913 18:21:37.533959   11676 host.go:66] Checking if "addons-084503" exists ...
	I0913 18:21:37.536700   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.536728   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.536926   11676 out.go:177] * Verifying Kubernetes components...
	I0913 18:21:37.536975   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.537015   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.538426   11676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:21:37.556754   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37765
	I0913 18:21:37.556775   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44879
	I0913 18:21:37.556851   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42473
	I0913 18:21:37.570514   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40913
	I0913 18:21:37.570780   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39721
	I0913 18:21:37.570518   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36331
	I0913 18:21:37.570518   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41285
	I0913 18:21:37.571031   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.571070   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.571093   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.571106   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.571247   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.571433   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.571454   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.572187   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.572204   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.572258   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.572268   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.572332   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.572388   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.572402   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.572412   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.572501   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.572522   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.572717   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.572732   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.572855   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.572866   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.572921   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.573032   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.573043   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.573155   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.573165   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.573214   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.573954   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.573957   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.574040   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.574336   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.574389   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.574441   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.574470   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.574752   11676 main.go:141] libmachine: (addons-084503) Calling .GetState
	I0913 18:21:37.574768   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.574831   11676 main.go:141] libmachine: (addons-084503) Calling .GetState
	I0913 18:21:37.574875   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.575021   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.575039   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.575054   11676 main.go:141] libmachine: (addons-084503) Calling .GetState
	I0913 18:21:37.576333   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.576367   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.578594   11676 host.go:66] Checking if "addons-084503" exists ...
	I0913 18:21:37.578978   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.579016   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.580970   11676 addons.go:234] Setting addon default-storageclass=true in "addons-084503"
	I0913 18:21:37.581014   11676 host.go:66] Checking if "addons-084503" exists ...
	I0913 18:21:37.581370   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.581401   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.591948   11676 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-084503"
	I0913 18:21:37.592014   11676 host.go:66] Checking if "addons-084503" exists ...
	I0913 18:21:37.592430   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.592483   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.612629   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40029
	I0913 18:21:37.613197   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.613781   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.613800   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.614187   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.614739   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.614789   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.615822   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40797
	I0913 18:21:37.616304   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.616788   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.616820   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.619230   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35371
	I0913 18:21:37.619245   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.619942   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.619983   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.620182   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.620750   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.620769   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.621053   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
	I0913 18:21:37.621231   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.621469   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.621827   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.621869   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.622551   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.622606   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.625190   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40067
	I0913 18:21:37.625204   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.625437   11676 main.go:141] libmachine: (addons-084503) Calling .GetState
	I0913 18:21:37.625504   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.625936   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.625975   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.626277   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.626799   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.626834   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.627535   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:37.629389   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33007
	I0913 18:21:37.629401   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38659
	I0913 18:21:37.629675   11676 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0913 18:21:37.629906   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.630043   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.630223   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43159
	I0913 18:21:37.630564   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.630581   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.630773   11676 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 18:21:37.630794   11676 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 18:21:37.630815   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:37.630968   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.631108   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.631123   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.631186   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46269
	I0913 18:21:37.631491   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.631904   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.632006   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.632508   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.632544   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.632667   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.632699   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.633028   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.633044   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.633502   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.633518   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.633888   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.634086   11676 main.go:141] libmachine: (addons-084503) Calling .GetState
	I0913 18:21:37.634781   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.635152   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:37.635175   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.635216   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33037
	I0913 18:21:37.635563   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:37.635613   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.635907   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:37.636035   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:37.636136   11676 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa Username:docker}
	I0913 18:21:37.636653   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.636692   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.636884   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:37.636965   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.638849   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.638876   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.639286   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.639735   11676 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0913 18:21:37.639806   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.639843   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.640146   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39003
	I0913 18:21:37.640294   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38973
	I0913 18:21:37.640855   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.641397   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.641416   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.641825   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.642285   11676 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 18:21:37.642431   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.642469   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.642748   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36811
	I0913 18:21:37.643226   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.643803   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.643970   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.643982   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.644548   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.644565   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.644579   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.644637   11676 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 18:21:37.644817   11676 main.go:141] libmachine: (addons-084503) Calling .GetState
	I0913 18:21:37.644994   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.645201   11676 main.go:141] libmachine: (addons-084503) Calling .GetState
	I0913 18:21:37.645928   11676 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 18:21:37.645947   11676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0913 18:21:37.645965   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:37.649393   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:37.650043   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.650612   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:37.650655   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.651014   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:37.651290   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:37.651453   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:37.651511   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:37.651617   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44841
	I0913 18:21:37.652346   11676 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa Username:docker}
	I0913 18:21:37.652772   11676 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0913 18:21:37.653445   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.653622   11676 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0913 18:21:37.654251   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.654269   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.654785   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.655010   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:37.655106   11676 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0913 18:21:37.655120   11676 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0913 18:21:37.655146   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:37.655925   11676 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 18:21:37.655942   11676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0913 18:21:37.655961   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:37.666540   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:37.666625   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.666660   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:37.666678   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.666870   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:37.667076   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:37.669289   11676 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa Username:docker}
	I0913 18:21:37.673243   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I0913 18:21:37.673637   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.673686   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.674110   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:37.674142   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.674503   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:37.674739   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:37.674956   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:37.674965   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.675099   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.675128   11676 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa Username:docker}
	I0913 18:21:37.675491   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.675693   11676 main.go:141] libmachine: (addons-084503) Calling .GetState
	I0913 18:21:37.677440   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:37.679407   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35033
	I0913 18:21:37.679902   11676 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0913 18:21:37.680198   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43875
	I0913 18:21:37.680498   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.680835   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.681292   11676 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 18:21:37.681311   11676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0913 18:21:37.681330   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:37.681606   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.681627   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.681786   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.681799   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.681864   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37087
	I0913 18:21:37.682196   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.682289   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.682509   11676 main.go:141] libmachine: (addons-084503) Calling .GetState
	I0913 18:21:37.682538   11676 main.go:141] libmachine: (addons-084503) Calling .GetState
	I0913 18:21:37.683196   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.684288   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.684306   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.684561   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:37.685173   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.685230   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:37.685290   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46575
	I0913 18:21:37.685611   11676 main.go:141] libmachine: (addons-084503) Calling .GetState
	I0913 18:21:37.686475   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35089
	I0913 18:21:37.686603   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.686613   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35657
	I0913 18:21:37.687040   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.687232   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.687256   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.687481   11676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 18:21:37.687574   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.687576   11676 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0913 18:21:37.687612   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.687591   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.687708   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:37.687804   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.687884   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43869
	I0913 18:21:37.687993   11676 main.go:141] libmachine: (addons-084503) Calling .GetState
	I0913 18:21:37.688072   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.688145   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.688197   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.688458   11676 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 18:21:37.688471   11676 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 18:21:37.688768   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:37.688809   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.688819   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.688848   11676 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:21:37.688862   11676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 18:21:37.688879   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:37.689745   11676 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0913 18:21:37.689760   11676 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0913 18:21:37.689780   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:37.689988   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.690052   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:37.690104   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:37.690118   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.690149   11676 main.go:141] libmachine: (addons-084503) Calling .GetState
	I0913 18:21:37.690198   11676 main.go:141] libmachine: (addons-084503) Calling .GetState
	I0913 18:21:37.690482   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:37.690828   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0913 18:21:37.691124   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:37.691415   11676 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa Username:docker}
	I0913 18:21:37.692125   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.692500   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:37.692651   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:37.693175   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.693195   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.693220   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.693388   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.693933   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.694100   11676 out.go:177]   - Using image docker.io/registry:2.8.3
	I0913 18:21:37.694583   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:37.694623   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:37.694946   11676 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0913 18:21:37.695289   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.695509   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:37.695512   11676 main.go:141] libmachine: (addons-084503) Calling .GetState
	I0913 18:21:37.696008   11676 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0913 18:21:37.696724   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.696731   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.696742   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.696762   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:37.696780   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.696790   11676 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0913 18:21:37.696842   11676 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0913 18:21:37.697023   11676 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0913 18:21:37.697036   11676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0913 18:21:37.697057   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:37.697077   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:37.697268   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:37.697348   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:37.697363   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.697528   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:37.697538   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:37.697546   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.697708   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:37.697729   11676 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa Username:docker}
	I0913 18:21:37.697766   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:37.697906   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:37.698049   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:37.698094   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:37.698445   11676 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0913 18:21:37.698459   11676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0913 18:21:37.698467   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:37.698476   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:37.698515   11676 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa Username:docker}
	I0913 18:21:37.699420   11676 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa Username:docker}
	I0913 18:21:37.699468   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:37.699436   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41901
	I0913 18:21:37.699881   11676 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0913 18:21:37.700246   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.700716   11676 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0913 18:21:37.701266   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.701285   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.701662   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.701852   11676 main.go:141] libmachine: (addons-084503) Calling .GetState
	I0913 18:21:37.701885   11676 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0913 18:21:37.702007   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.702862   11676 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0913 18:21:37.702991   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.703184   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:37.703239   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.703364   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:37.703492   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:37.703519   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.703572   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:37.703677   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:37.703750   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:37.703862   11676 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0913 18:21:37.704017   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:37.704009   11676 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa Username:docker}
	I0913 18:21:37.704230   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:37.704380   11676 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa Username:docker}
	I0913 18:21:37.704460   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:37.704914   11676 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0913 18:21:37.705917   11676 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0913 18:21:37.705928   11676 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0913 18:21:37.706695   11676 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0913 18:21:37.706717   11676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0913 18:21:37.706730   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:37.706883   11676 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0913 18:21:37.706903   11676 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0913 18:21:37.706920   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:37.708678   11676 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0913 18:21:37.709649   11676 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0913 18:21:37.710528   11676 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0913 18:21:37.710549   11676 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0913 18:21:37.710569   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:37.711097   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.711117   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.711655   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:37.711684   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:37.711654   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:37.711726   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.711702   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.711859   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:37.711960   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:37.711997   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:37.712218   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:37.712217   11676 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa Username:docker}
	I0913 18:21:37.712554   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:37.712709   11676 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa Username:docker}
	I0913 18:21:37.714464   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.714915   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:37.714935   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.715170   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:37.715334   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:37.715483   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:37.715589   11676 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa Username:docker}
	I0913 18:21:37.715898   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38859
	W0913 18:21:37.715917   11676 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60704->192.168.39.228:22: read: connection reset by peer
	I0913 18:21:37.715941   11676 retry.go:31] will retry after 361.289702ms: ssh: handshake failed: read tcp 192.168.39.1:60704->192.168.39.228:22: read: connection reset by peer
	I0913 18:21:37.716287   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:37.716784   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:37.716797   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:37.717138   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:37.717288   11676 main.go:141] libmachine: (addons-084503) Calling .GetState
	I0913 18:21:37.718808   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:37.720260   11676 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0913 18:21:37.721311   11676 out.go:177]   - Using image docker.io/busybox:stable
	I0913 18:21:37.722442   11676 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 18:21:37.722459   11676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0913 18:21:37.722479   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:37.725649   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.726119   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:37.726143   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:37.726370   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:37.726593   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:37.726709   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:37.726815   11676 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa Username:docker}
	W0913 18:21:37.727719   11676 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60732->192.168.39.228:22: read: connection reset by peer
	I0913 18:21:37.727738   11676 retry.go:31] will retry after 234.153185ms: ssh: handshake failed: read tcp 192.168.39.1:60732->192.168.39.228:22: read: connection reset by peer
	I0913 18:21:38.062927   11676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 18:21:38.062951   11676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0913 18:21:38.117993   11676 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0913 18:21:38.118025   11676 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0913 18:21:38.159700   11676 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0913 18:21:38.159724   11676 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0913 18:21:38.172968   11676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:21:38.173145   11676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0913 18:21:38.179031   11676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 18:21:38.185106   11676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 18:21:38.187922   11676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 18:21:38.194377   11676 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0913 18:21:38.194407   11676 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0913 18:21:38.218816   11676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:21:38.234708   11676 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0913 18:21:38.234741   11676 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0913 18:21:38.252850   11676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 18:21:38.271011   11676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0913 18:21:38.289156   11676 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0913 18:21:38.289182   11676 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0913 18:21:38.302538   11676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 18:21:38.302564   11676 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 18:21:38.309277   11676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0913 18:21:38.326867   11676 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0913 18:21:38.326892   11676 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0913 18:21:38.332577   11676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 18:21:38.352750   11676 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0913 18:21:38.352780   11676 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0913 18:21:38.358038   11676 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0913 18:21:38.358067   11676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0913 18:21:38.375211   11676 node_ready.go:35] waiting up to 6m0s for node "addons-084503" to be "Ready" ...
	I0913 18:21:38.378012   11676 node_ready.go:49] node "addons-084503" has status "Ready":"True"
	I0913 18:21:38.378044   11676 node_ready.go:38] duration metric: took 2.791821ms for node "addons-084503" to be "Ready" ...
	I0913 18:21:38.378056   11676 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:21:38.384811   11676 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-b6b4x" in "kube-system" namespace to be "Ready" ...
	I0913 18:21:38.440207   11676 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0913 18:21:38.440235   11676 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0913 18:21:38.453166   11676 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0913 18:21:38.453187   11676 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0913 18:21:38.474517   11676 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 18:21:38.474539   11676 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 18:21:38.485425   11676 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0913 18:21:38.485448   11676 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0913 18:21:38.560078   11676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0913 18:21:38.561714   11676 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0913 18:21:38.561747   11676 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0913 18:21:38.581575   11676 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0913 18:21:38.581603   11676 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0913 18:21:38.608014   11676 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0913 18:21:38.608045   11676 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0913 18:21:38.637958   11676 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0913 18:21:38.637986   11676 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0913 18:21:38.650980   11676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 18:21:38.737342   11676 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0913 18:21:38.737375   11676 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0913 18:21:38.741455   11676 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0913 18:21:38.741489   11676 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0913 18:21:38.797406   11676 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:21:38.797439   11676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0913 18:21:38.808674   11676 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0913 18:21:38.808703   11676 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0913 18:21:38.877182   11676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:21:38.938764   11676 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0913 18:21:38.938792   11676 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0913 18:21:38.967028   11676 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0913 18:21:38.967056   11676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0913 18:21:39.015687   11676 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0913 18:21:39.015718   11676 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0913 18:21:39.138950   11676 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0913 18:21:39.138982   11676 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0913 18:21:39.141603   11676 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 18:21:39.141633   11676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0913 18:21:39.775392   11676 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0913 18:21:39.775414   11676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0913 18:21:39.826667   11676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 18:21:40.212995   11676 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0913 18:21:40.213015   11676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0913 18:21:40.401211   11676 pod_ready.go:103] pod "coredns-7c65d6cfc9-b6b4x" in "kube-system" namespace has status "Ready":"False"
	I0913 18:21:40.421896   11676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0913 18:21:40.554400   11676 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0913 18:21:40.554423   11676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0913 18:21:40.899050   11676 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 18:21:40.899076   11676 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0913 18:21:41.119809   11676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 18:21:41.403411   11676 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.230231512s)
	I0913 18:21:41.403445   11676 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0913 18:21:41.940359   11676 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-084503" context rescaled to 1 replicas
	I0913 18:21:42.302862   11676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.123795631s)
	I0913 18:21:42.302923   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:42.302936   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:42.303269   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:42.303289   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:42.303287   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:42.303304   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:42.303312   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:42.303662   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:42.303684   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:42.303706   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:42.479775   11676 pod_ready.go:103] pod "coredns-7c65d6cfc9-b6b4x" in "kube-system" namespace has status "Ready":"False"
	I0913 18:21:44.496658   11676 pod_ready.go:103] pod "coredns-7c65d6cfc9-b6b4x" in "kube-system" namespace has status "Ready":"False"
	I0913 18:21:44.702232   11676 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0913 18:21:44.702290   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:44.705547   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:44.705985   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:44.706014   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:44.706204   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:44.706411   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:44.706565   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:44.706706   11676 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa Username:docker}
	I0913 18:21:45.311109   11676 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0913 18:21:45.657349   11676 addons.go:234] Setting addon gcp-auth=true in "addons-084503"
	I0913 18:21:45.657394   11676 host.go:66] Checking if "addons-084503" exists ...
	I0913 18:21:45.657682   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:45.657710   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:45.674001   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33219
	I0913 18:21:45.674391   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:45.674874   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:45.674900   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:45.675203   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:45.675767   11676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:21:45.675818   11676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:21:45.691443   11676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36573
	I0913 18:21:45.691914   11676 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:21:45.692453   11676 main.go:141] libmachine: Using API Version  1
	I0913 18:21:45.692482   11676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:21:45.692782   11676 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:21:45.692982   11676 main.go:141] libmachine: (addons-084503) Calling .GetState
	I0913 18:21:45.694554   11676 main.go:141] libmachine: (addons-084503) Calling .DriverName
	I0913 18:21:45.694785   11676 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0913 18:21:45.694808   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHHostname
	I0913 18:21:45.697669   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:45.698119   11676 main.go:141] libmachine: (addons-084503) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:28:52", ip: ""} in network mk-addons-084503: {Iface:virbr1 ExpiryTime:2024-09-13 19:21:03 +0000 UTC Type:0 Mac:52:54:00:ca:28:52 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-084503 Clientid:01:52:54:00:ca:28:52}
	I0913 18:21:45.698152   11676 main.go:141] libmachine: (addons-084503) DBG | domain addons-084503 has defined IP address 192.168.39.228 and MAC address 52:54:00:ca:28:52 in network mk-addons-084503
	I0913 18:21:45.698331   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHPort
	I0913 18:21:45.698504   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHKeyPath
	I0913 18:21:45.698637   11676 main.go:141] libmachine: (addons-084503) Calling .GetSSHUsername
	I0913 18:21:45.698783   11676 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/addons-084503/id_rsa Username:docker}
	I0913 18:21:46.920312   11676 pod_ready.go:103] pod "coredns-7c65d6cfc9-b6b4x" in "kube-system" namespace has status "Ready":"False"
	I0913 18:21:47.580021   11676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.39487556s)
	I0913 18:21:47.580072   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:47.580084   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:47.580090   11676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.392134154s)
	I0913 18:21:47.580138   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:47.580155   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:47.580182   11676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.361338935s)
	I0913 18:21:47.580209   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:47.580217   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:47.580286   11676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.327392757s)
	I0913 18:21:47.580359   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:47.580444   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:47.580449   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:47.580474   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:47.580365   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:47.580510   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:47.580512   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:47.580521   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:47.580524   11676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.30948908s)
	I0913 18:21:47.580531   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:47.580531   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:47.580541   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:47.580548   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:47.580556   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:47.580564   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:47.580549   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:47.580743   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:47.580754   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:47.580791   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:47.580805   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:47.580811   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:47.580815   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:47.580822   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:47.580829   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:47.580831   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:47.580837   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:47.580845   11676 addons.go:475] Verifying addon ingress=true in "addons-084503"
	I0913 18:21:47.582425   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:47.582455   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:47.582469   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:47.582472   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:47.582482   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:47.582482   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:47.582489   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:47.582492   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:47.582503   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:47.582512   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:47.582520   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:47.582496   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:47.582843   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:47.582850   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:47.582888   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:47.582866   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:47.583731   11676 out.go:177] * Verifying ingress addon...
	I0913 18:21:47.584489   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:47.584506   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:47.586281   11676 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0913 18:21:47.641331   11676 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0913 18:21:47.641353   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:47.669951   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:47.669975   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:47.670204   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:47.670223   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:48.105629   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:48.614555   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:48.980973   11676 pod_ready.go:103] pod "coredns-7c65d6cfc9-b6b4x" in "kube-system" namespace has status "Ready":"False"
	I0913 18:21:49.195272   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:49.626415   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:50.102822   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:50.307404   11676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.974785989s)
	I0913 18:21:50.307454   11676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.747336192s)
	I0913 18:21:50.307477   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:50.307493   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:50.307495   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:50.307510   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:50.307530   11676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.656516624s)
	I0913 18:21:50.307556   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:50.307613   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:50.307680   11676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (11.43044834s)
	W0913 18:21:50.307723   11676 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 18:21:50.307768   11676 retry.go:31] will retry after 182.976136ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 18:21:50.307824   11676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (10.481053259s)
	I0913 18:21:50.307861   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:50.307886   11676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.885958575s)
	I0913 18:21:50.307908   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:50.307929   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:50.307890   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:50.308280   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:50.308283   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:50.308299   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:50.308309   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:50.308314   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:50.308316   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:50.308321   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:50.308329   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:50.308336   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:50.308386   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:50.308406   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:50.308427   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:50.308434   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:50.308442   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:50.308448   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:50.308564   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:50.308589   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:50.308596   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:50.308718   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:50.308740   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:50.308746   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:50.308752   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:50.308758   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:50.308957   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:50.308967   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:50.308677   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:50.308699   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:50.308992   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:50.309002   11676 addons.go:475] Verifying addon metrics-server=true in "addons-084503"
	I0913 18:21:50.309544   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:50.309568   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:50.309574   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:50.309750   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:50.308975   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:50.309780   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:50.309788   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:50.309800   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:50.309963   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:50.309970   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:50.309980   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:50.310005   11676 addons.go:475] Verifying addon registry=true in "addons-084503"
	I0913 18:21:50.310333   11676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (12.001025245s)
	I0913 18:21:50.310785   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:50.310795   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:50.311010   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:50.311037   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:50.311043   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:50.311051   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:50.311056   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:50.311367   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:50.311405   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:50.311421   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:50.311448   11676 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-084503 service yakd-dashboard -n yakd-dashboard
	
	I0913 18:21:50.312292   11676 out.go:177] * Verifying registry addon...
	I0913 18:21:50.314510   11676 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0913 18:21:50.335340   11676 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0913 18:21:50.335362   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:21:50.408211   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:50.408234   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:50.408519   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:50.408536   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:50.491379   11676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:21:50.625372   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:50.826235   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:21:51.099406   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:51.158509   11676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (10.038637235s)
	I0913 18:21:51.158562   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:51.158575   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:51.158514   11676 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.463708223s)
	I0913 18:21:51.158898   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:51.158954   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:51.158971   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:51.158985   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:51.158992   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:51.159204   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:51.159181   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:51.159221   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:51.159231   11676 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-084503"
	I0913 18:21:51.159815   11676 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 18:21:51.161248   11676 out.go:177] * Verifying csi-hostpath-driver addon...
	I0913 18:21:51.162210   11676 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0913 18:21:51.162910   11676 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0913 18:21:51.163233   11676 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0913 18:21:51.163250   11676 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0913 18:21:51.201525   11676 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0913 18:21:51.201609   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:21:51.219460   11676 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0913 18:21:51.219485   11676 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0913 18:21:51.324204   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:21:51.350132   11676 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 18:21:51.350154   11676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0913 18:21:51.405711   11676 pod_ready.go:103] pod "coredns-7c65d6cfc9-b6b4x" in "kube-system" namespace has status "Ready":"False"
	I0913 18:21:51.474931   11676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 18:21:51.591671   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:51.680139   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:21:51.817748   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:21:52.091781   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:52.166831   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:21:52.319474   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:21:52.534223   11676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.042798487s)
	I0913 18:21:52.534290   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:52.534306   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:52.534586   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:52.534631   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:52.534643   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:52.534652   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:52.534662   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:52.534913   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:52.534929   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:52.591363   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:52.691907   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:21:52.830906   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:21:53.020416   11676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.545447902s)
	I0913 18:21:53.020465   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:53.020481   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:53.020743   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:53.020785   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:53.020805   11676 main.go:141] libmachine: Making call to close driver server
	I0913 18:21:53.020826   11676 main.go:141] libmachine: (addons-084503) Calling .Close
	I0913 18:21:53.020789   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:53.021036   11676 main.go:141] libmachine: Successfully made call to close driver server
	I0913 18:21:53.021073   11676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0913 18:21:53.021092   11676 main.go:141] libmachine: (addons-084503) DBG | Closing plugin on server side
	I0913 18:21:53.023183   11676 addons.go:475] Verifying addon gcp-auth=true in "addons-084503"
	I0913 18:21:53.024653   11676 out.go:177] * Verifying gcp-auth addon...
	I0913 18:21:53.026427   11676 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0913 18:21:53.033234   11676 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 18:21:53.134274   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:53.236065   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:21:53.336411   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:21:53.590379   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:53.670383   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:21:53.817852   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:21:53.891286   11676 pod_ready.go:103] pod "coredns-7c65d6cfc9-b6b4x" in "kube-system" namespace has status "Ready":"False"
	I0913 18:21:54.090154   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:54.167481   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:21:54.318392   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:21:54.589957   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:54.667540   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:21:54.818914   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:21:55.090393   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:55.168060   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:21:55.318937   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:21:55.632379   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:55.666741   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:21:55.818858   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:21:56.132255   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:56.168070   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:21:56.318131   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:21:56.391120   11676 pod_ready.go:103] pod "coredns-7c65d6cfc9-b6b4x" in "kube-system" namespace has status "Ready":"False"
	I0913 18:21:56.590167   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:56.667577   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:21:57.139934   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:21:57.140244   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:57.168968   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:21:57.319182   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:21:57.590980   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:57.668342   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:21:57.818714   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:21:58.091052   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:58.167888   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:21:58.318610   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:21:58.391250   11676 pod_ready.go:103] pod "coredns-7c65d6cfc9-b6b4x" in "kube-system" namespace has status "Ready":"False"
	I0913 18:21:58.802776   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:58.802951   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:21:58.820375   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:21:59.093998   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:59.167078   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:21:59.319478   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:21:59.590659   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:21:59.668873   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:21:59.821118   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:00.132647   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:00.168001   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:00.318407   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:00.391394   11676 pod_ready.go:103] pod "coredns-7c65d6cfc9-b6b4x" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:00.591705   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:00.667646   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:00.818185   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:01.092167   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:01.167560   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:01.318229   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:01.590776   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:01.667819   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:01.819088   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:02.090463   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:02.167756   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:02.319317   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:02.391922   11676 pod_ready.go:103] pod "coredns-7c65d6cfc9-b6b4x" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:02.590481   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:02.668469   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:02.819105   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:03.288345   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:03.288528   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:03.390569   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:03.591133   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:03.668154   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:03.819050   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:04.092244   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:04.168004   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:04.319082   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:04.590617   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:04.667034   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:04.818550   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:04.890502   11676 pod_ready.go:103] pod "coredns-7c65d6cfc9-b6b4x" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:05.090862   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:05.167449   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:05.318056   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:05.590703   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:05.668137   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:05.818923   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:06.090784   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:06.168272   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:06.318996   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:06.591156   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:06.666974   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:06.818776   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:06.896591   11676 pod_ready.go:103] pod "coredns-7c65d6cfc9-b6b4x" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:07.105660   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:07.181654   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:07.321472   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:07.590933   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:07.668553   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:07.817529   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:08.132572   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:08.167160   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:08.318092   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:08.590933   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:08.668297   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:08.817819   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:09.090446   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:09.167594   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:09.318116   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:09.391466   11676 pod_ready.go:103] pod "coredns-7c65d6cfc9-b6b4x" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:09.591344   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:09.667674   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:09.818172   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:10.091509   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:10.168438   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:10.318293   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:10.592319   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:10.667925   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:10.903133   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:11.090720   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:11.167772   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:11.317788   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:11.590144   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:11.667443   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:11.818888   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:12.234406   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:12.234524   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:12.236683   11676 pod_ready.go:103] pod "coredns-7c65d6cfc9-b6b4x" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:12.320580   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:12.412526   11676 pod_ready.go:93] pod "coredns-7c65d6cfc9-b6b4x" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:12.412555   11676 pod_ready.go:82] duration metric: took 34.027712315s for pod "coredns-7c65d6cfc9-b6b4x" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:12.412567   11676 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qlz2x" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:12.419284   11676 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-qlz2x" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-qlz2x" not found
	I0913 18:22:12.419336   11676 pod_ready.go:82] duration metric: took 6.75944ms for pod "coredns-7c65d6cfc9-qlz2x" in "kube-system" namespace to be "Ready" ...
	E0913 18:22:12.419350   11676 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-qlz2x" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-qlz2x" not found
	I0913 18:22:12.419375   11676 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-084503" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:12.439162   11676 pod_ready.go:93] pod "etcd-addons-084503" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:12.439192   11676 pod_ready.go:82] duration metric: took 19.8087ms for pod "etcd-addons-084503" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:12.439205   11676 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-084503" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:12.453312   11676 pod_ready.go:93] pod "kube-apiserver-addons-084503" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:12.453344   11676 pod_ready.go:82] duration metric: took 14.129095ms for pod "kube-apiserver-addons-084503" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:12.453358   11676 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-084503" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:12.460063   11676 pod_ready.go:93] pod "kube-controller-manager-addons-084503" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:12.460092   11676 pod_ready.go:82] duration metric: took 6.725273ms for pod "kube-controller-manager-addons-084503" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:12.460107   11676 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cjzhz" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:12.591501   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:12.638109   11676 pod_ready.go:93] pod "kube-proxy-cjzhz" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:12.638148   11676 pod_ready.go:82] duration metric: took 178.033958ms for pod "kube-proxy-cjzhz" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:12.638166   11676 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-084503" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:12.675868   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:12.819829   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:13.039679   11676 pod_ready.go:93] pod "kube-scheduler-addons-084503" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:13.039849   11676 pod_ready.go:82] duration metric: took 401.65738ms for pod "kube-scheduler-addons-084503" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:13.039873   11676 pod_ready.go:39] duration metric: took 34.661798726s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:22:13.039930   11676 api_server.go:52] waiting for apiserver process to appear ...
	I0913 18:22:13.040026   11676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:13.090152   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:13.090464   11676 api_server.go:72] duration metric: took 35.55676012s to wait for apiserver process to appear ...
	I0913 18:22:13.090490   11676 api_server.go:88] waiting for apiserver healthz status ...
	I0913 18:22:13.090513   11676 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0913 18:22:13.097235   11676 api_server.go:279] https://192.168.39.228:8443/healthz returned 200:
	ok
	I0913 18:22:13.098820   11676 api_server.go:141] control plane version: v1.31.1
	I0913 18:22:13.098862   11676 api_server.go:131] duration metric: took 8.361833ms to wait for apiserver health ...
	I0913 18:22:13.098875   11676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 18:22:13.168409   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:13.245979   11676 system_pods.go:59] 17 kube-system pods found
	I0913 18:22:13.246035   11676 system_pods.go:61] "coredns-7c65d6cfc9-b6b4x" [39b41c4b-4e8a-4497-9d9f-de1fa4bb7160] Running
	I0913 18:22:13.246049   11676 system_pods.go:61] "csi-hostpath-attacher-0" [9455dfd1-ddeb-4809-ba05-440cd928f01d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 18:22:13.246060   11676 system_pods.go:61] "csi-hostpath-resizer-0" [8c085121-16fa-41a0-b392-05a3355f69a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 18:22:13.246074   11676 system_pods.go:61] "csi-hostpathplugin-n6wj7" [67f13833-3361-4173-a17b-fc79605b9d72] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 18:22:13.246081   11676 system_pods.go:61] "etcd-addons-084503" [b1e04e92-5224-4016-80a4-2ba5d0540793] Running
	I0913 18:22:13.246088   11676 system_pods.go:61] "kube-apiserver-addons-084503" [1cce0338-fc03-41b8-b2bf-0b1114326e1a] Running
	I0913 18:22:13.246093   11676 system_pods.go:61] "kube-controller-manager-addons-084503" [072ae0ba-6e6e-49f6-8e31-840db996bd43] Running
	I0913 18:22:13.246099   11676 system_pods.go:61] "kube-ingress-dns-minikube" [6d00a6f3-3f92-42f3-b67d-d9952971cdda] Running
	I0913 18:22:13.246105   11676 system_pods.go:61] "kube-proxy-cjzhz" [5c7aaaa8-e9d7-4889-bfea-033bd2662b97] Running
	I0913 18:22:13.246110   11676 system_pods.go:61] "kube-scheduler-addons-084503" [e8aa1f73-e6e4-4599-a6be-ebf004a007ff] Running
	I0913 18:22:13.246118   11676 system_pods.go:61] "metrics-server-84c5f94fbc-g9d9c" [b58dd703-5930-4049-89a0-e44fc582da9a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 18:22:13.246124   11676 system_pods.go:61] "nvidia-device-plugin-daemonset-628zk" [c1f45629-5794-47b5-ad6f-1ba8de29946d] Running
	I0913 18:22:13.246132   11676 system_pods.go:61] "registry-66c9cd494c-d5lpq" [8c735e84-f4ab-4bf1-aad6-c5a4d187b69d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 18:22:13.246147   11676 system_pods.go:61] "registry-proxy-9sz2r" [52fdf99f-a086-42c5-88fc-da0c47c197d1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 18:22:13.246159   11676 system_pods.go:61] "snapshot-controller-56fcc65765-bhkz7" [81280606-30de-49d7-a7ae-a037414171cc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:13.246171   11676 system_pods.go:61] "snapshot-controller-56fcc65765-z6sqs" [5eabd051-116c-460f-ac63-e8658aa7bb25] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:13.246179   11676 system_pods.go:61] "storage-provisioner" [59583020-f55a-4106-83da-1f2308ecf966] Running
	I0913 18:22:13.246189   11676 system_pods.go:74] duration metric: took 147.305766ms to wait for pod list to return data ...
	I0913 18:22:13.246202   11676 default_sa.go:34] waiting for default service account to be created ...
	I0913 18:22:13.318621   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:13.437673   11676 default_sa.go:45] found service account: "default"
	I0913 18:22:13.437710   11676 default_sa.go:55] duration metric: took 191.49742ms for default service account to be created ...
	I0913 18:22:13.437722   11676 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 18:22:13.591566   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:13.647152   11676 system_pods.go:86] 17 kube-system pods found
	I0913 18:22:13.647199   11676 system_pods.go:89] "coredns-7c65d6cfc9-b6b4x" [39b41c4b-4e8a-4497-9d9f-de1fa4bb7160] Running
	I0913 18:22:13.647213   11676 system_pods.go:89] "csi-hostpath-attacher-0" [9455dfd1-ddeb-4809-ba05-440cd928f01d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 18:22:13.647223   11676 system_pods.go:89] "csi-hostpath-resizer-0" [8c085121-16fa-41a0-b392-05a3355f69a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 18:22:13.647236   11676 system_pods.go:89] "csi-hostpathplugin-n6wj7" [67f13833-3361-4173-a17b-fc79605b9d72] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 18:22:13.647243   11676 system_pods.go:89] "etcd-addons-084503" [b1e04e92-5224-4016-80a4-2ba5d0540793] Running
	I0913 18:22:13.647249   11676 system_pods.go:89] "kube-apiserver-addons-084503" [1cce0338-fc03-41b8-b2bf-0b1114326e1a] Running
	I0913 18:22:13.647255   11676 system_pods.go:89] "kube-controller-manager-addons-084503" [072ae0ba-6e6e-49f6-8e31-840db996bd43] Running
	I0913 18:22:13.647262   11676 system_pods.go:89] "kube-ingress-dns-minikube" [6d00a6f3-3f92-42f3-b67d-d9952971cdda] Running
	I0913 18:22:13.647267   11676 system_pods.go:89] "kube-proxy-cjzhz" [5c7aaaa8-e9d7-4889-bfea-033bd2662b97] Running
	I0913 18:22:13.647272   11676 system_pods.go:89] "kube-scheduler-addons-084503" [e8aa1f73-e6e4-4599-a6be-ebf004a007ff] Running
	I0913 18:22:13.647280   11676 system_pods.go:89] "metrics-server-84c5f94fbc-g9d9c" [b58dd703-5930-4049-89a0-e44fc582da9a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 18:22:13.647287   11676 system_pods.go:89] "nvidia-device-plugin-daemonset-628zk" [c1f45629-5794-47b5-ad6f-1ba8de29946d] Running
	I0913 18:22:13.647296   11676 system_pods.go:89] "registry-66c9cd494c-d5lpq" [8c735e84-f4ab-4bf1-aad6-c5a4d187b69d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 18:22:13.647307   11676 system_pods.go:89] "registry-proxy-9sz2r" [52fdf99f-a086-42c5-88fc-da0c47c197d1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 18:22:13.647316   11676 system_pods.go:89] "snapshot-controller-56fcc65765-bhkz7" [81280606-30de-49d7-a7ae-a037414171cc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:13.647325   11676 system_pods.go:89] "snapshot-controller-56fcc65765-z6sqs" [5eabd051-116c-460f-ac63-e8658aa7bb25] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:13.647335   11676 system_pods.go:89] "storage-provisioner" [59583020-f55a-4106-83da-1f2308ecf966] Running
	I0913 18:22:13.647345   11676 system_pods.go:126] duration metric: took 209.614984ms to wait for k8s-apps to be running ...
	I0913 18:22:13.647354   11676 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 18:22:13.647412   11676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:22:13.668506   11676 system_svc.go:56] duration metric: took 21.138754ms WaitForService to wait for kubelet
	I0913 18:22:13.668542   11676 kubeadm.go:582] duration metric: took 36.134841795s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:22:13.668566   11676 node_conditions.go:102] verifying NodePressure condition ...
	I0913 18:22:13.675459   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:13.819348   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:13.837795   11676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 18:22:13.837835   11676 node_conditions.go:123] node cpu capacity is 2
	I0913 18:22:13.837851   11676 node_conditions.go:105] duration metric: took 169.278311ms to run NodePressure ...
	I0913 18:22:13.837869   11676 start.go:241] waiting for startup goroutines ...
	I0913 18:22:14.150967   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:14.168250   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:14.318719   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:14.591004   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:14.667347   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:14.818689   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:15.091177   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:15.167965   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:15.333041   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:15.593628   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:15.668089   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:15.818884   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:16.128799   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:16.231398   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:16.330139   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:16.590316   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:16.667914   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:16.819318   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:17.091116   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:17.168846   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:17.317773   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:17.590045   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:17.667511   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:17.818828   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:18.123174   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:18.169573   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:18.319216   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:18.591030   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:18.667399   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:18.817926   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:19.090325   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:19.168165   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:19.318422   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:19.590762   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:19.668018   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:19.818263   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:20.090976   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:20.167189   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:20.318389   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:20.591472   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:20.668042   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:20.818157   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:21.090682   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:21.167864   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:21.318456   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:21.591212   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:21.667903   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:21.818533   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:22.091013   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:22.167627   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:22.318053   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:22.590450   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:22.667293   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:22.819514   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:23.091701   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:23.168713   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:23.318110   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:23.764775   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:23.764940   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:23.818134   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:24.137108   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:24.168747   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:24.318517   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:24.797978   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:24.800947   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:24.820244   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:25.091069   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:25.167736   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:25.318463   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:25.592292   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:25.668134   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:25.824366   11676 kapi.go:107] duration metric: took 35.509851958s to wait for kubernetes.io/minikube-addons=registry ...
	I0913 18:22:26.090548   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:26.167527   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:26.591315   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:26.668128   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:27.091043   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:27.167079   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:27.590079   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:27.670279   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:28.092241   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:28.167613   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:28.590432   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:28.667862   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:29.360796   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:29.360859   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:29.593722   11676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:29.671426   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	... [~130 near-identical entries elided: both the app.kubernetes.io/name=ingress-nginx and kubernetes.io/minikube-addons=csi-hostpath-driver pods were polled at ~500ms intervals and remained Pending from 18:22:30 through 18:23:02] ...
	I0913 18:23:03.135653   11676 kapi.go:107] duration metric: took 1m15.549369776s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0913 18:23:03.168202   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:03.667123   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	... [similar entries elided: kubernetes.io/minikube-addons=csi-hostpath-driver polled at ~500ms intervals, still Pending through 18:23:10] ...
	I0913 18:23:10.667645   11676 kapi.go:107] duration metric: took 1m19.504734392s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0913 18:23:15.045247   11676 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 18:23:15.045270   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:15.529602   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	... [~110 near-identical entries elided: the kubernetes.io/minikube-addons=gcp-auth pod was polled at ~500ms intervals and remained Pending from 18:23:16 through 18:24:10] ...
	I0913 18:24:11.030822   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:11.529904   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:12.031358   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:12.530387   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:13.031613   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:13.530709   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:14.030209   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:14.529880   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:15.030763   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:15.530253   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:16.033618   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:16.530682   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:17.031134   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:17.530825   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:18.030517   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:18.530300   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:19.031398   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:19.529754   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:20.030563   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:20.530787   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:21.031448   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:21.529633   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:22.030502   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:22.531068   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:23.030709   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:23.530868   11676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:24.031116   11676 kapi.go:107] duration metric: took 2m31.004687657s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0913 18:24:24.032809   11676 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-084503 cluster.
	I0913 18:24:24.033959   11676 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0913 18:24:24.035236   11676 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0913 18:24:24.036477   11676 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, cloud-spanner, default-storageclass, metrics-server, inspektor-gadget, volcano, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0913 18:24:24.037605   11676 addons.go:510] duration metric: took 2m46.503876534s for enable addons: enabled=[ingress-dns storage-provisioner nvidia-device-plugin cloud-spanner default-storageclass metrics-server inspektor-gadget volcano yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0913 18:24:24.037641   11676 start.go:246] waiting for cluster config update ...
	I0913 18:24:24.037658   11676 start.go:255] writing updated cluster config ...
	I0913 18:24:24.037915   11676 ssh_runner.go:195] Run: rm -f paused
	I0913 18:24:24.092105   11676 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 18:24:24.093786   11676 out.go:177] * Done! kubectl is now configured to use "addons-084503" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 13 18:34:11 addons-084503 dockerd[1280]: time="2024-09-13T18:34:11.994751531Z" level=info msg="shim disconnected" id=1105c0260fd2cf1e6acffadd8e03b72bbf3976c86dd8edbae8b41424c6682d26 namespace=moby
	Sep 13 18:34:11 addons-084503 dockerd[1280]: time="2024-09-13T18:34:11.995642174Z" level=warning msg="cleaning up after shim disconnected" id=1105c0260fd2cf1e6acffadd8e03b72bbf3976c86dd8edbae8b41424c6682d26 namespace=moby
	Sep 13 18:34:11 addons-084503 dockerd[1280]: time="2024-09-13T18:34:11.995674287Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 13 18:34:19 addons-084503 dockerd[1274]: time="2024-09-13T18:34:19.844309094Z" level=info msg="ignoring event" container=c72fe517bd65ab0b9014f9ac88eb1b25e4b396dde54f0269de1bd6c3219f89f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:19 addons-084503 dockerd[1280]: time="2024-09-13T18:34:19.847267115Z" level=info msg="shim disconnected" id=c72fe517bd65ab0b9014f9ac88eb1b25e4b396dde54f0269de1bd6c3219f89f9 namespace=moby
	Sep 13 18:34:19 addons-084503 dockerd[1280]: time="2024-09-13T18:34:19.847459389Z" level=warning msg="cleaning up after shim disconnected" id=c72fe517bd65ab0b9014f9ac88eb1b25e4b396dde54f0269de1bd6c3219f89f9 namespace=moby
	Sep 13 18:34:19 addons-084503 dockerd[1280]: time="2024-09-13T18:34:19.847493352Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 13 18:34:20 addons-084503 dockerd[1280]: time="2024-09-13T18:34:20.314068810Z" level=info msg="shim disconnected" id=b7faa0820fd4effa343d5b37bf9ad5675816b1448134159241e8b63e3dfa4c25 namespace=moby
	Sep 13 18:34:20 addons-084503 dockerd[1280]: time="2024-09-13T18:34:20.314125298Z" level=warning msg="cleaning up after shim disconnected" id=b7faa0820fd4effa343d5b37bf9ad5675816b1448134159241e8b63e3dfa4c25 namespace=moby
	Sep 13 18:34:20 addons-084503 dockerd[1280]: time="2024-09-13T18:34:20.314150945Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 13 18:34:20 addons-084503 dockerd[1274]: time="2024-09-13T18:34:20.315123049Z" level=info msg="ignoring event" container=b7faa0820fd4effa343d5b37bf9ad5675816b1448134159241e8b63e3dfa4c25 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:20 addons-084503 dockerd[1280]: time="2024-09-13T18:34:20.429172696Z" level=info msg="shim disconnected" id=b7406e04890db3bcf1f5cc44925a0c5fa69b981e966479a2744a75b1909aff74 namespace=moby
	Sep 13 18:34:20 addons-084503 dockerd[1280]: time="2024-09-13T18:34:20.429623822Z" level=warning msg="cleaning up after shim disconnected" id=b7406e04890db3bcf1f5cc44925a0c5fa69b981e966479a2744a75b1909aff74 namespace=moby
	Sep 13 18:34:20 addons-084503 dockerd[1280]: time="2024-09-13T18:34:20.429830886Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 13 18:34:20 addons-084503 dockerd[1274]: time="2024-09-13T18:34:20.430686792Z" level=info msg="ignoring event" container=b7406e04890db3bcf1f5cc44925a0c5fa69b981e966479a2744a75b1909aff74 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:20 addons-084503 dockerd[1274]: time="2024-09-13T18:34:20.493034402Z" level=info msg="ignoring event" container=2b72862663ad8bf0558c899d046749cb3faa024eeeb3478b1c737d5aa673c4bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:20 addons-084503 dockerd[1280]: time="2024-09-13T18:34:20.493771566Z" level=info msg="shim disconnected" id=2b72862663ad8bf0558c899d046749cb3faa024eeeb3478b1c737d5aa673c4bb namespace=moby
	Sep 13 18:34:20 addons-084503 dockerd[1280]: time="2024-09-13T18:34:20.494200229Z" level=warning msg="cleaning up after shim disconnected" id=2b72862663ad8bf0558c899d046749cb3faa024eeeb3478b1c737d5aa673c4bb namespace=moby
	Sep 13 18:34:20 addons-084503 dockerd[1280]: time="2024-09-13T18:34:20.494252468Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 13 18:34:20 addons-084503 dockerd[1280]: time="2024-09-13T18:34:20.520697432Z" level=warning msg="cleanup warnings time=\"2024-09-13T18:34:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 13 18:34:20 addons-084503 dockerd[1274]: time="2024-09-13T18:34:20.618870373Z" level=info msg="ignoring event" container=fe9f35fbad98c0ddd4e1462aaa10dc6278b873d455f5f3b7bbd63bad8392e7db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:20 addons-084503 dockerd[1280]: time="2024-09-13T18:34:20.622099820Z" level=info msg="shim disconnected" id=fe9f35fbad98c0ddd4e1462aaa10dc6278b873d455f5f3b7bbd63bad8392e7db namespace=moby
	Sep 13 18:34:20 addons-084503 dockerd[1280]: time="2024-09-13T18:34:20.622165300Z" level=warning msg="cleaning up after shim disconnected" id=fe9f35fbad98c0ddd4e1462aaa10dc6278b873d455f5f3b7bbd63bad8392e7db namespace=moby
	Sep 13 18:34:20 addons-084503 dockerd[1280]: time="2024-09-13T18:34:20.622177020Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 13 18:34:20 addons-084503 dockerd[1280]: time="2024-09-13T18:34:20.652155672Z" level=warning msg="cleanup warnings time=\"2024-09-13T18:34:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7094ec1205862       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  13 seconds ago      Running             hello-world-app           0                   cdc3b7ae412c8       hello-world-app-55bf9c44b4-2b7dl
	a4f3e7f2280a7       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                21 seconds ago      Running             nginx                     0                   0a63b99c94e90       nginx
	3f41f23653e4d       a416a98b71e22                                                                                                                49 seconds ago      Exited              helper-pod                0                   69b3e6a292fdb       helper-pod-delete-pvc-a311ee20-76c9-43bb-aa4f-017e3c6d3a8c
	cad75b34f260c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   9125bf1161ccf       gcp-auth-89d5ffd79-8qcqm
	6b80f3fd9e420       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   233ba753d4135       ingress-nginx-admission-patch-cjxg2
	cce81531aa896       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   195d0cb994b2d       ingress-nginx-admission-create-8c6fv
	b7406e04890db       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              11 minutes ago      Exited              registry-proxy            0                   fe9f35fbad98c       registry-proxy-9sz2r
	b7faa0820fd4e       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                             12 minutes ago      Exited              registry                  0                   2b72862663ad8       registry-66c9cd494c-d5lpq
	1858626b43b40       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   851d3d86e3ff6       storage-provisioner
	9bf659d4fcc31       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   f54e76e6b9f6d       coredns-7c65d6cfc9-b6b4x
	cf5e17f4d8e9d       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   a839ac34a5774       kube-proxy-cjzhz
	9e1012c72a6ec       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   26d53151466a4       kube-scheduler-addons-084503
	8544c6b150de9       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   c1aed73910cba       etcd-addons-084503
	616152cb3ada3       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   07cf6dc68aed2       kube-controller-manager-addons-084503
	cfca34a798af8       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   3a80e81a5d76a       kube-apiserver-addons-084503
	
	
	==> coredns [9bf659d4fcc3] <==
	[INFO] 10.244.0.21:54489 - 29555 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000049848s
	[INFO] 10.244.0.21:50881 - 50764 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00007436s
	[INFO] 10.244.0.21:50881 - 20882 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000070892s
	[INFO] 10.244.0.21:54489 - 40052 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000069958s
	[INFO] 10.244.0.21:50881 - 56345 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000072924s
	[INFO] 10.244.0.21:54489 - 1305 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000113248s
	[INFO] 10.244.0.21:50881 - 54130 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000068962s
	[INFO] 10.244.0.21:54489 - 25963 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004402s
	[INFO] 10.244.0.21:50881 - 3689 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000161666s
	[INFO] 10.244.0.21:54489 - 439 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042279s
	[INFO] 10.244.0.21:54489 - 28952 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000067796s
	[INFO] 10.244.0.21:58125 - 36891 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000104839s
	[INFO] 10.244.0.21:58125 - 27521 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000085956s
	[INFO] 10.244.0.21:52203 - 49156 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000057978s
	[INFO] 10.244.0.21:52203 - 32728 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000138355s
	[INFO] 10.244.0.21:58125 - 36932 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000058778s
	[INFO] 10.244.0.21:52203 - 30988 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000048613s
	[INFO] 10.244.0.21:58125 - 41162 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000070306s
	[INFO] 10.244.0.21:58125 - 44109 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041674s
	[INFO] 10.244.0.21:52203 - 54518 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037905s
	[INFO] 10.244.0.21:58125 - 29963 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036407s
	[INFO] 10.244.0.21:52203 - 39661 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036862s
	[INFO] 10.244.0.21:58125 - 8335 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000047619s
	[INFO] 10.244.0.21:52203 - 54349 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003886s
	[INFO] 10.244.0.21:52203 - 54091 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000068899s
	
	
	==> describe nodes <==
	Name:               addons-084503
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-084503
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=addons-084503
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T18_21_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-084503
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:21:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-084503
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:34:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:34:09 +0000   Fri, 13 Sep 2024 18:21:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:34:09 +0000   Fri, 13 Sep 2024 18:21:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:34:09 +0000   Fri, 13 Sep 2024 18:21:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:34:09 +0000   Fri, 13 Sep 2024 18:21:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    addons-084503
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 5b1132d3377f4c57b9c1b187beafcaf0
	  System UUID:                5b1132d3-377f-4c57-b9c1-b187beafcaf0
	  Boot ID:                    3e1920e6-15fc-417e-9454-69ac7b5ec5e8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     hello-world-app-55bf9c44b4-2b7dl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  gcp-auth                    gcp-auth-89d5ffd79-8qcqm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-b6b4x                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-084503                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-084503             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-084503    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-cjzhz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-084503             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x4 over 12m)  kubelet          Node addons-084503 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x4 over 12m)  kubelet          Node addons-084503 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x4 over 12m)  kubelet          Node addons-084503 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-084503 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-084503 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-084503 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-084503 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-084503 event: Registered Node addons-084503 in Controller
	
	
	==> dmesg <==
	[  +5.824207] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.946286] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.191430] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.087458] kauditd_printk_skb: 39 callbacks suppressed
	[Sep13 18:23] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.683234] kauditd_printk_skb: 23 callbacks suppressed
	[ +13.441076] kauditd_printk_skb: 28 callbacks suppressed
	[ +34.823113] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 18:24] kauditd_printk_skb: 40 callbacks suppressed
	[  +7.494709] kauditd_printk_skb: 9 callbacks suppressed
	[ +15.245280] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.875668] kauditd_printk_skb: 2 callbacks suppressed
	[Sep13 18:25] kauditd_printk_skb: 20 callbacks suppressed
	[ +20.271097] kauditd_printk_skb: 2 callbacks suppressed
	[Sep13 18:28] kauditd_printk_skb: 28 callbacks suppressed
	[Sep13 18:33] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.510723] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.018742] kauditd_printk_skb: 43 callbacks suppressed
	[  +6.065229] kauditd_printk_skb: 43 callbacks suppressed
	[  +8.179848] kauditd_printk_skb: 24 callbacks suppressed
	[  +7.679564] kauditd_printk_skb: 19 callbacks suppressed
	[  +8.162613] kauditd_printk_skb: 34 callbacks suppressed
	[Sep13 18:34] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.059617] kauditd_printk_skb: 14 callbacks suppressed
	[ +12.504189] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [8544c6b150de] <==
	{"level":"info","ts":"2024-09-13T18:22:34.674249Z","caller":"traceutil/trace.go:171","msg":"trace[1562570860] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1085; }","duration":"291.59021ms","start":"2024-09-13T18:22:34.382650Z","end":"2024-09-13T18:22:34.674241Z","steps":["trace[1562570860] 'count revisions from in-memory index tree'  (duration: 291.498724ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:22:43.586620Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"293.63133ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-13T18:22:43.586670Z","caller":"traceutil/trace.go:171","msg":"trace[2074201051] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; response_count:0; response_revision:1132; }","duration":"293.692953ms","start":"2024-09-13T18:22:43.292967Z","end":"2024-09-13T18:22:43.586660Z","steps":["trace[2074201051] 'count revisions from in-memory index tree'  (duration: 293.545306ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:22:43.586847Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.357119ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:22:43.586869Z","caller":"traceutil/trace.go:171","msg":"trace[272358548] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1132; }","duration":"237.379833ms","start":"2024-09-13T18:22:43.349482Z","end":"2024-09-13T18:22:43.586862Z","steps":["trace[272358548] 'range keys from in-memory index tree'  (duration: 237.351842ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:23:02.351162Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.51691ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:23:02.351244Z","caller":"traceutil/trace.go:171","msg":"trace[1985264101] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1214; }","duration":"280.663514ms","start":"2024-09-13T18:23:02.070570Z","end":"2024-09-13T18:23:02.351233Z","steps":["trace[1985264101] 'range keys from in-memory index tree'  (duration: 280.448326ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:23:02.351423Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.678265ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:23:02.351477Z","caller":"traceutil/trace.go:171","msg":"trace[1046144001] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1214; }","duration":"204.709543ms","start":"2024-09-13T18:23:02.146732Z","end":"2024-09-13T18:23:02.351442Z","steps":["trace[1046144001] 'range keys from in-memory index tree'  (duration: 204.634034ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T18:24:47.164632Z","caller":"traceutil/trace.go:171","msg":"trace[514420662] transaction","detail":"{read_only:false; response_revision:1546; number_of_response:1; }","duration":"251.386466ms","start":"2024-09-13T18:24:46.913213Z","end":"2024-09-13T18:24:47.164599Z","steps":["trace[514420662] 'process raft request'  (duration: 251.250262ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T18:24:47.165309Z","caller":"traceutil/trace.go:171","msg":"trace[926519883] linearizableReadLoop","detail":"{readStateIndex:1606; appliedIndex:1605; }","duration":"177.347222ms","start":"2024-09-13T18:24:46.987941Z","end":"2024-09-13T18:24:47.165288Z","steps":["trace[926519883] 'read index received'  (duration: 177.162269ms)","trace[926519883] 'applied index is now lower than readState.Index'  (duration: 184.595µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-13T18:24:47.166409Z","caller":"traceutil/trace.go:171","msg":"trace[599459274] transaction","detail":"{read_only:false; response_revision:1547; number_of_response:1; }","duration":"229.626039ms","start":"2024-09-13T18:24:46.936769Z","end":"2024-09-13T18:24:47.166395Z","steps":["trace[599459274] 'process raft request'  (duration: 228.464204ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T18:31:28.686606Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1856}
	{"level":"info","ts":"2024-09-13T18:31:28.789511Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1856,"took":"101.73203ms","hash":2068719851,"current-db-size-bytes":8904704,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4947968,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-13T18:31:28.789582Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2068719851,"revision":1856,"compact-revision":-1}
	{"level":"warn","ts":"2024-09-13T18:33:32.014182Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"341.276351ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-13T18:33:32.014386Z","caller":"traceutil/trace.go:171","msg":"trace[528809165] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2647; }","duration":"341.510133ms","start":"2024-09-13T18:33:31.672800Z","end":"2024-09-13T18:33:32.014311Z","steps":["trace[528809165] 'range keys from in-memory index tree'  (duration: 341.136112ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:33:32.014435Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T18:33:31.672761Z","time spent":"341.660701ms","remote":"127.0.0.1:59852","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-13T18:33:32.014560Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"315.957228ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.228\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-09-13T18:33:32.014597Z","caller":"traceutil/trace.go:171","msg":"trace[1676309359] range","detail":"{range_begin:/registry/masterleases/192.168.39.228; range_end:; response_count:1; response_revision:2647; }","duration":"316.001211ms","start":"2024-09-13T18:33:31.698588Z","end":"2024-09-13T18:33:32.014589Z","steps":["trace[1676309359] 'range keys from in-memory index tree'  (duration: 315.817536ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:33:32.014622Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-13T18:33:31.698555Z","time spent":"316.061972ms","remote":"127.0.0.1:59920","response type":"/etcdserverpb.KV/Range","request count":0,"request size":39,"response count":1,"response size":158,"request content":"key:\"/registry/masterleases/192.168.39.228\" "}
	{"level":"warn","ts":"2024-09-13T18:33:32.014656Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.18448ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" ","response":"range_response_count:1 size:982"}
	{"level":"info","ts":"2024-09-13T18:33:32.014675Z","caller":"traceutil/trace.go:171","msg":"trace[938472997] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc-restore; range_end:; response_count:1; response_revision:2647; }","duration":"111.203859ms","start":"2024-09-13T18:33:31.903465Z","end":"2024-09-13T18:33:32.014669Z","steps":["trace[938472997] 'range keys from in-memory index tree'  (duration: 111.108486ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-13T18:33:32.014839Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.711044ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" ","response":"range_response_count:1 size:621"}
	{"level":"info","ts":"2024-09-13T18:33:32.014858Z","caller":"traceutil/trace.go:171","msg":"trace[1128571286] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:1; response_revision:2647; }","duration":"190.731408ms","start":"2024-09-13T18:33:31.824120Z","end":"2024-09-13T18:33:32.014851Z","steps":["trace[1128571286] 'range keys from in-memory index tree'  (duration: 190.630555ms)"],"step_count":1}
	
	
	==> gcp-auth [cad75b34f260] <==
	2024/09/13 18:25:06 Ready to write response ...
	2024/09/13 18:25:06 Ready to marshal response ...
	2024/09/13 18:25:06 Ready to write response ...
	2024/09/13 18:33:11 Ready to marshal response ...
	2024/09/13 18:33:11 Ready to write response ...
	2024/09/13 18:33:19 Ready to marshal response ...
	2024/09/13 18:33:19 Ready to write response ...
	2024/09/13 18:33:20 Ready to marshal response ...
	2024/09/13 18:33:20 Ready to write response ...
	2024/09/13 18:33:20 Ready to marshal response ...
	2024/09/13 18:33:20 Ready to write response ...
	2024/09/13 18:33:28 Ready to marshal response ...
	2024/09/13 18:33:28 Ready to write response ...
	2024/09/13 18:33:28 Ready to marshal response ...
	2024/09/13 18:33:28 Ready to write response ...
	2024/09/13 18:33:28 Ready to marshal response ...
	2024/09/13 18:33:28 Ready to write response ...
	2024/09/13 18:33:31 Ready to marshal response ...
	2024/09/13 18:33:31 Ready to write response ...
	2024/09/13 18:33:40 Ready to marshal response ...
	2024/09/13 18:33:40 Ready to write response ...
	2024/09/13 18:33:56 Ready to marshal response ...
	2024/09/13 18:33:56 Ready to write response ...
	2024/09/13 18:34:06 Ready to marshal response ...
	2024/09/13 18:34:06 Ready to write response ...
	
	
	==> kernel <==
	 18:34:21 up 13 min,  0 users,  load average: 1.61, 1.18, 0.78
	Linux addons-084503 5.10.207 #1 SMP Thu Sep 12 19:03:33 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cfca34a798af] <==
	W0913 18:24:58.394994       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0913 18:24:58.587568       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0913 18:24:58.937214       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0913 18:33:20.999991       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0913 18:33:21.400099       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0913 18:33:28.453265       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.122.200"}
	E0913 18:33:47.300476       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0913 18:33:52.784048       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0913 18:33:53.829835       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0913 18:33:56.108996       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 18:33:56.109058       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 18:33:56.137716       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 18:33:56.137903       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 18:33:56.149069       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 18:33:56.149131       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 18:33:56.249253       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 18:33:56.249289       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 18:33:56.312490       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 18:33:56.312604       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 18:33:56.814780       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0913 18:33:57.030217       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.113.173"}
	W0913 18:33:57.250527       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0913 18:33:57.313087       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0913 18:33:57.318558       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0913 18:34:06.528945       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.181.173"}
	
	
	==> kube-controller-manager [616152cb3ada] <==
	I0913 18:34:06.397584       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="37.77083ms"
	I0913 18:34:06.427290       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="29.409444ms"
	I0913 18:34:06.427551       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="181.353µs"
	I0913 18:34:06.427761       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="79.905µs"
	I0913 18:34:07.377154       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0913 18:34:07.377238       1 shared_informer.go:320] Caches are synced for resource quota
	I0913 18:34:07.483386       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0913 18:34:07.483433       1 shared_informer.go:320] Caches are synced for garbage collector
	I0913 18:34:08.731970       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0913 18:34:08.753792       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="8.89µs"
	I0913 18:34:08.763683       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0913 18:34:09.041437       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-084503"
	I0913 18:34:09.110670       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.500366ms"
	I0913 18:34:09.111304       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="45.688µs"
	W0913 18:34:12.288558       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:34:12.288626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:34:12.714816       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:34:12.714868       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:34:13.317544       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:34:13.317685       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:34:16.084307       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:34:16.084421       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 18:34:18.780934       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I0913 18:34:19.779414       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I0913 18:34:20.224018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="7.54µs"
	
	
	==> kube-proxy [cf5e17f4d8e9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 18:21:40.174092       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 18:21:40.186493       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.228"]
	E0913 18:21:40.186579       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 18:21:40.519217       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 18:21:40.519269       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 18:21:40.519295       1 server_linux.go:169] "Using iptables Proxier"
	I0913 18:21:40.522383       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 18:21:40.522744       1 server.go:483] "Version info" version="v1.31.1"
	I0913 18:21:40.522769       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:21:40.529033       1 config.go:199] "Starting service config controller"
	I0913 18:21:40.529070       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 18:21:40.529105       1 config.go:105] "Starting endpoint slice config controller"
	I0913 18:21:40.529149       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 18:21:40.540702       1 config.go:328] "Starting node config controller"
	I0913 18:21:40.540737       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 18:21:40.629484       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 18:21:40.629548       1 shared_informer.go:320] Caches are synced for service config
	I0913 18:21:40.644639       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9e1012c72a6e] <==
	W0913 18:21:29.937701       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0913 18:21:29.937943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:29.937790       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0913 18:21:29.938646       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0913 18:21:29.938684       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0913 18:21:29.938650       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:29.938603       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 18:21:29.939799       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:29.942211       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0913 18:21:29.942249       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0913 18:21:30.854247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0913 18:21:30.854409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:30.905084       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0913 18:21:30.905135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:30.995291       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0913 18:21:30.995378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:31.014707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 18:21:31.014755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:31.041186       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 18:21:31.041233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:31.064866       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0913 18:21:31.064912       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0913 18:21:31.243819       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0913 18:21:31.243920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0913 18:21:33.916741       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 13 18:34:12 addons-084503 kubelet[2047]: I0913 18:34:12.191813    2047 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/958664f5-d249-4bf6-a3cc-74e5f423aa08-webhook-cert\") on node \"addons-084503\" DevicePath \"\""
	Sep 13 18:34:12 addons-084503 kubelet[2047]: I0913 18:34:12.563934    2047 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="958664f5-d249-4bf6-a3cc-74e5f423aa08" path="/var/lib/kubelet/pods/958664f5-d249-4bf6-a3cc-74e5f423aa08/volumes"
	Sep 13 18:34:14 addons-084503 kubelet[2047]: E0913 18:34:14.560969    2047 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="f9395e43-3799-44db-9f9f-45ffa99dc41d"
	Sep 13 18:34:17 addons-084503 kubelet[2047]: E0913 18:34:17.556502    2047 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="7f83db3a-4f8f-4ce9-9f3e-3cfcbbd04083"
	Sep 13 18:34:19 addons-084503 kubelet[2047]: I0913 18:34:19.948548    2047 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f9395e43-3799-44db-9f9f-45ffa99dc41d-gcp-creds\") pod \"f9395e43-3799-44db-9f9f-45ffa99dc41d\" (UID: \"f9395e43-3799-44db-9f9f-45ffa99dc41d\") "
	Sep 13 18:34:19 addons-084503 kubelet[2047]: I0913 18:34:19.948623    2047 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjhmx\" (UniqueName: \"kubernetes.io/projected/f9395e43-3799-44db-9f9f-45ffa99dc41d-kube-api-access-sjhmx\") pod \"f9395e43-3799-44db-9f9f-45ffa99dc41d\" (UID: \"f9395e43-3799-44db-9f9f-45ffa99dc41d\") "
	Sep 13 18:34:19 addons-084503 kubelet[2047]: I0913 18:34:19.948883    2047 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9395e43-3799-44db-9f9f-45ffa99dc41d-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "f9395e43-3799-44db-9f9f-45ffa99dc41d" (UID: "f9395e43-3799-44db-9f9f-45ffa99dc41d"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 13 18:34:19 addons-084503 kubelet[2047]: I0913 18:34:19.951525    2047 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9395e43-3799-44db-9f9f-45ffa99dc41d-kube-api-access-sjhmx" (OuterVolumeSpecName: "kube-api-access-sjhmx") pod "f9395e43-3799-44db-9f9f-45ffa99dc41d" (UID: "f9395e43-3799-44db-9f9f-45ffa99dc41d"). InnerVolumeSpecName "kube-api-access-sjhmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 18:34:20 addons-084503 kubelet[2047]: I0913 18:34:20.049155    2047 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sjhmx\" (UniqueName: \"kubernetes.io/projected/f9395e43-3799-44db-9f9f-45ffa99dc41d-kube-api-access-sjhmx\") on node \"addons-084503\" DevicePath \"\""
	Sep 13 18:34:20 addons-084503 kubelet[2047]: I0913 18:34:20.049194    2047 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f9395e43-3799-44db-9f9f-45ffa99dc41d-gcp-creds\") on node \"addons-084503\" DevicePath \"\""
	Sep 13 18:34:20 addons-084503 kubelet[2047]: I0913 18:34:20.572612    2047 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9395e43-3799-44db-9f9f-45ffa99dc41d" path="/var/lib/kubelet/pods/f9395e43-3799-44db-9f9f-45ffa99dc41d/volumes"
	Sep 13 18:34:20 addons-084503 kubelet[2047]: I0913 18:34:20.655844    2047 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6c52\" (UniqueName: \"kubernetes.io/projected/8c735e84-f4ab-4bf1-aad6-c5a4d187b69d-kube-api-access-q6c52\") pod \"8c735e84-f4ab-4bf1-aad6-c5a4d187b69d\" (UID: \"8c735e84-f4ab-4bf1-aad6-c5a4d187b69d\") "
	Sep 13 18:34:20 addons-084503 kubelet[2047]: I0913 18:34:20.658610    2047 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c735e84-f4ab-4bf1-aad6-c5a4d187b69d-kube-api-access-q6c52" (OuterVolumeSpecName: "kube-api-access-q6c52") pod "8c735e84-f4ab-4bf1-aad6-c5a4d187b69d" (UID: "8c735e84-f4ab-4bf1-aad6-c5a4d187b69d"). InnerVolumeSpecName "kube-api-access-q6c52". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 18:34:20 addons-084503 kubelet[2047]: I0913 18:34:20.756418    2047 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xhwx\" (UniqueName: \"kubernetes.io/projected/52fdf99f-a086-42c5-88fc-da0c47c197d1-kube-api-access-8xhwx\") pod \"52fdf99f-a086-42c5-88fc-da0c47c197d1\" (UID: \"52fdf99f-a086-42c5-88fc-da0c47c197d1\") "
	Sep 13 18:34:20 addons-084503 kubelet[2047]: I0913 18:34:20.756858    2047 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-q6c52\" (UniqueName: \"kubernetes.io/projected/8c735e84-f4ab-4bf1-aad6-c5a4d187b69d-kube-api-access-q6c52\") on node \"addons-084503\" DevicePath \"\""
	Sep 13 18:34:20 addons-084503 kubelet[2047]: I0913 18:34:20.758395    2047 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52fdf99f-a086-42c5-88fc-da0c47c197d1-kube-api-access-8xhwx" (OuterVolumeSpecName: "kube-api-access-8xhwx") pod "52fdf99f-a086-42c5-88fc-da0c47c197d1" (UID: "52fdf99f-a086-42c5-88fc-da0c47c197d1"). InnerVolumeSpecName "kube-api-access-8xhwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 18:34:20 addons-084503 kubelet[2047]: I0913 18:34:20.858181    2047 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8xhwx\" (UniqueName: \"kubernetes.io/projected/52fdf99f-a086-42c5-88fc-da0c47c197d1-kube-api-access-8xhwx\") on node \"addons-084503\" DevicePath \"\""
	Sep 13 18:34:21 addons-084503 kubelet[2047]: I0913 18:34:21.286515    2047 scope.go:117] "RemoveContainer" containerID="b7faa0820fd4effa343d5b37bf9ad5675816b1448134159241e8b63e3dfa4c25"
	Sep 13 18:34:21 addons-084503 kubelet[2047]: I0913 18:34:21.342657    2047 scope.go:117] "RemoveContainer" containerID="b7faa0820fd4effa343d5b37bf9ad5675816b1448134159241e8b63e3dfa4c25"
	Sep 13 18:34:21 addons-084503 kubelet[2047]: E0913 18:34:21.344077    2047 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: b7faa0820fd4effa343d5b37bf9ad5675816b1448134159241e8b63e3dfa4c25" containerID="b7faa0820fd4effa343d5b37bf9ad5675816b1448134159241e8b63e3dfa4c25"
	Sep 13 18:34:21 addons-084503 kubelet[2047]: I0913 18:34:21.344134    2047 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"b7faa0820fd4effa343d5b37bf9ad5675816b1448134159241e8b63e3dfa4c25"} err="failed to get container status \"b7faa0820fd4effa343d5b37bf9ad5675816b1448134159241e8b63e3dfa4c25\": rpc error: code = Unknown desc = Error response from daemon: No such container: b7faa0820fd4effa343d5b37bf9ad5675816b1448134159241e8b63e3dfa4c25"
	Sep 13 18:34:21 addons-084503 kubelet[2047]: I0913 18:34:21.344158    2047 scope.go:117] "RemoveContainer" containerID="b7406e04890db3bcf1f5cc44925a0c5fa69b981e966479a2744a75b1909aff74"
	Sep 13 18:34:21 addons-084503 kubelet[2047]: I0913 18:34:21.371632    2047 scope.go:117] "RemoveContainer" containerID="b7406e04890db3bcf1f5cc44925a0c5fa69b981e966479a2744a75b1909aff74"
	Sep 13 18:34:21 addons-084503 kubelet[2047]: E0913 18:34:21.372716    2047 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: b7406e04890db3bcf1f5cc44925a0c5fa69b981e966479a2744a75b1909aff74" containerID="b7406e04890db3bcf1f5cc44925a0c5fa69b981e966479a2744a75b1909aff74"
	Sep 13 18:34:21 addons-084503 kubelet[2047]: I0913 18:34:21.372766    2047 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"b7406e04890db3bcf1f5cc44925a0c5fa69b981e966479a2744a75b1909aff74"} err="failed to get container status \"b7406e04890db3bcf1f5cc44925a0c5fa69b981e966479a2744a75b1909aff74\": rpc error: code = Unknown desc = Error response from daemon: No such container: b7406e04890db3bcf1f5cc44925a0c5fa69b981e966479a2744a75b1909aff74"
	
	
	==> storage-provisioner [1858626b43b4] <==
	I0913 18:21:47.014140       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 18:21:47.046059       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 18:21:47.046124       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 18:21:47.067733       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 18:21:47.067904       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-084503_68c038a1-feb2-49d7-b02a-e2a9a93411c8!
	I0913 18:21:47.070084       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ff5faf3d-0af3-42ca-891f-aedfc7346e9f", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-084503_68c038a1-feb2-49d7-b02a-e2a9a93411c8 became leader
	I0913 18:21:47.170446       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-084503_68c038a1-feb2-49d7-b02a-e2a9a93411c8!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-084503 -n addons-084503
helpers_test.go:261: (dbg) Run:  kubectl --context addons-084503 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-084503 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-084503 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-084503/192.168.39.228
	Start Time:       Fri, 13 Sep 2024 18:25:06 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kf2l4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kf2l4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m16s                   default-scheduler  Successfully assigned default/busybox to addons-084503
	  Normal   Pulling    7m55s (x4 over 9m16s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m55s (x4 over 9m15s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m55s (x4 over 9m15s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m29s (x6 over 9m15s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m11s (x20 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (73.54s)


Test pass (308/340)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.29
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 5.03
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.58
22 TestOffline 88.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 215.88
29 TestAddons/serial/Volcano 41.96
31 TestAddons/serial/GCPAuth/Namespaces 0.12
34 TestAddons/parallel/Ingress 19.28
35 TestAddons/parallel/InspektorGadget 11.7
36 TestAddons/parallel/MetricsServer 6.9
38 TestAddons/parallel/CSI 47.93
39 TestAddons/parallel/Headlamp 18.91
40 TestAddons/parallel/CloudSpanner 6.48
41 TestAddons/parallel/LocalPath 54.31
42 TestAddons/parallel/NvidiaDevicePlugin 5.47
43 TestAddons/parallel/Yakd 11.92
44 TestAddons/StoppedEnableDisable 8.56
45 TestCertOptions 64.33
46 TestCertExpiration 285.04
47 TestDockerFlags 74
48 TestForceSystemdFlag 55.28
49 TestForceSystemdEnv 61.45
51 TestKVMDriverInstallOrUpdate 6.02
55 TestErrorSpam/setup 49.72
56 TestErrorSpam/start 0.34
57 TestErrorSpam/status 0.75
58 TestErrorSpam/pause 1.19
59 TestErrorSpam/unpause 1.4
60 TestErrorSpam/stop 15
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 95.5
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 40.09
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.15
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.43
72 TestFunctional/serial/CacheCmd/cache/add_local 1.31
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.15
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.11
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 40.23
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 0.93
83 TestFunctional/serial/LogsFileCmd 0.98
84 TestFunctional/serial/InvalidService 3.86
86 TestFunctional/parallel/ConfigCmd 0.34
87 TestFunctional/parallel/DashboardCmd 19.59
88 TestFunctional/parallel/DryRun 0.39
89 TestFunctional/parallel/InternationalLanguage 0.15
90 TestFunctional/parallel/StatusCmd 0.84
94 TestFunctional/parallel/ServiceCmdConnect 8.81
95 TestFunctional/parallel/AddonsCmd 0.16
96 TestFunctional/parallel/PersistentVolumeClaim 44.98
98 TestFunctional/parallel/SSHCmd 0.45
99 TestFunctional/parallel/CpCmd 1.4
100 TestFunctional/parallel/MySQL 35.02
101 TestFunctional/parallel/FileSync 0.27
102 TestFunctional/parallel/CertSync 1.44
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.2
110 TestFunctional/parallel/License 0.23
111 TestFunctional/parallel/ServiceCmd/DeployApp 10.57
112 TestFunctional/parallel/DockerEnv/bash 0.92
113 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
114 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
115 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
129 TestFunctional/parallel/ImageCommands/ImageBuild 3.54
130 TestFunctional/parallel/ImageCommands/Setup 1.62
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.11
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.74
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.51
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.56
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.42
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.79
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.48
138 TestFunctional/parallel/Version/short 0.05
139 TestFunctional/parallel/Version/components 0.78
140 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
141 TestFunctional/parallel/ProfileCmd/profile_list 0.65
142 TestFunctional/parallel/ServiceCmd/List 0.34
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.57
144 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
145 TestFunctional/parallel/MountCmd/any-port 21.56
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
147 TestFunctional/parallel/ServiceCmd/Format 0.31
148 TestFunctional/parallel/ServiceCmd/URL 0.33
149 TestFunctional/parallel/MountCmd/specific-port 1.83
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.45
151 TestFunctional/delete_echo-server_images 0.04
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
154 TestGvisorAddon 185.2
157 TestMultiControlPlane/serial/StartCluster 215.88
158 TestMultiControlPlane/serial/DeployApp 5.45
159 TestMultiControlPlane/serial/PingHostFromPods 1.32
160 TestMultiControlPlane/serial/AddWorkerNode 65.52
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.55
163 TestMultiControlPlane/serial/CopyFile 12.99
164 TestMultiControlPlane/serial/StopSecondaryNode 13.93
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.39
166 TestMultiControlPlane/serial/RestartSecondaryNode 47.51
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.54
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 260.81
169 TestMultiControlPlane/serial/DeleteSecondaryNode 7.1
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.39
171 TestMultiControlPlane/serial/StopCluster 38.34
172 TestMultiControlPlane/serial/RestartCluster 162.62
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
174 TestMultiControlPlane/serial/AddSecondaryNode 81.56
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.54
178 TestImageBuild/serial/Setup 51.5
179 TestImageBuild/serial/NormalBuild 2.32
180 TestImageBuild/serial/BuildWithBuildArg 1.14
181 TestImageBuild/serial/BuildWithDockerIgnore 1.05
182 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.86
186 TestJSONOutput/start/Command 91.29
187 TestJSONOutput/start/Audit 0
189 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Command 0.56
193 TestJSONOutput/pause/Audit 0
195 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Command 0.53
199 TestJSONOutput/unpause/Audit 0
201 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/stop/Command 12.66
205 TestJSONOutput/stop/Audit 0
207 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
209 TestErrorJSONOutput 0.19
214 TestMainNoArgs 0.04
215 TestMinikubeProfile 104.39
218 TestMountStart/serial/StartWithMountFirst 32.54
219 TestMountStart/serial/VerifyMountFirst 0.38
220 TestMountStart/serial/StartWithMountSecond 30.81
221 TestMountStart/serial/VerifyMountSecond 0.38
222 TestMountStart/serial/DeleteFirst 0.7
223 TestMountStart/serial/VerifyMountPostDelete 0.38
224 TestMountStart/serial/Stop 3.28
225 TestMountStart/serial/RestartStopped 26.22
226 TestMountStart/serial/VerifyMountPostStop 0.37
229 TestMultiNode/serial/FreshStart2Nodes 125.54
230 TestMultiNode/serial/DeployApp2Nodes 4.27
231 TestMultiNode/serial/PingHostFrom2Pods 0.84
232 TestMultiNode/serial/AddNode 61.96
233 TestMultiNode/serial/MultiNodeLabels 0.06
234 TestMultiNode/serial/ProfileList 0.22
235 TestMultiNode/serial/CopyFile 7.11
236 TestMultiNode/serial/StopNode 3.34
237 TestMultiNode/serial/StartAfterStop 42.17
238 TestMultiNode/serial/RestartKeepsNodes 174.08
239 TestMultiNode/serial/DeleteNode 2.35
240 TestMultiNode/serial/StopMultiNode 25.1
241 TestMultiNode/serial/RestartMultiNode 118.13
242 TestMultiNode/serial/ValidateNameConflict 52.54
247 TestPreload 184.28
249 TestScheduledStopUnix 119.44
250 TestSkaffold 134.15
253 TestRunningBinaryUpgrade 203.84
255 TestKubernetesUpgrade 190.03
268 TestStoppedBinaryUpgrade/Setup 0.6
269 TestStoppedBinaryUpgrade/Upgrade 319.28
271 TestPause/serial/Start 109.76
279 TestPause/serial/SecondStartNoReconfiguration 61.16
280 TestPause/serial/Pause 2.18
281 TestPause/serial/VerifyStatus 0.29
282 TestPause/serial/Unpause 0.61
283 TestPause/serial/PauseAgain 0.73
284 TestPause/serial/DeletePaused 1.05
285 TestPause/serial/VerifyDeletedResources 0.54
286 TestStoppedBinaryUpgrade/MinikubeLogs 2
288 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
289 TestNoKubernetes/serial/StartWithK8s 83.11
290 TestNoKubernetes/serial/StartWithStopK8s 45.09
291 TestNetworkPlugins/group/auto/Start 63.96
292 TestNoKubernetes/serial/Start 56.52
293 TestNetworkPlugins/group/flannel/Start 113.9
294 TestNetworkPlugins/group/enable-default-cni/Start 123.44
295 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
296 TestNoKubernetes/serial/ProfileList 0.83
297 TestNoKubernetes/serial/Stop 2.3
298 TestNoKubernetes/serial/StartNoArgs 72.6
299 TestNetworkPlugins/group/auto/KubeletFlags 0.2
300 TestNetworkPlugins/group/auto/NetCatPod 10.23
301 TestNetworkPlugins/group/auto/DNS 0.17
302 TestNetworkPlugins/group/auto/Localhost 0.13
303 TestNetworkPlugins/group/auto/HairPin 0.12
304 TestNetworkPlugins/group/bridge/Start 101.58
305 TestNetworkPlugins/group/flannel/ControllerPod 6.01
306 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
307 TestNetworkPlugins/group/flannel/NetCatPod 11.26
308 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
309 TestNetworkPlugins/group/kubenet/Start 122.58
310 TestNetworkPlugins/group/flannel/DNS 0.22
311 TestNetworkPlugins/group/flannel/Localhost 0.21
312 TestNetworkPlugins/group/flannel/HairPin 0.16
313 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
314 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.25
315 TestNetworkPlugins/group/enable-default-cni/DNS 21.61
316 TestNetworkPlugins/group/calico/Start 126.34
317 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
318 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
319 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
320 TestNetworkPlugins/group/bridge/NetCatPod 12.33
321 TestNetworkPlugins/group/kindnet/Start 101.26
322 TestNetworkPlugins/group/bridge/DNS 0.19
323 TestNetworkPlugins/group/bridge/Localhost 0.14
324 TestNetworkPlugins/group/bridge/HairPin 0.15
325 TestNetworkPlugins/group/custom-flannel/Start 89.62
326 TestNetworkPlugins/group/kubenet/KubeletFlags 0.26
327 TestNetworkPlugins/group/kubenet/NetCatPod 11.31
328 TestNetworkPlugins/group/kubenet/DNS 0.19
329 TestNetworkPlugins/group/kubenet/Localhost 0.14
330 TestNetworkPlugins/group/kubenet/HairPin 0.16
331 TestNetworkPlugins/group/calico/ControllerPod 6.01
332 TestNetworkPlugins/group/false/Start 101.78
333 TestNetworkPlugins/group/calico/KubeletFlags 0.24
334 TestNetworkPlugins/group/calico/NetCatPod 12.32
335 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
336 TestNetworkPlugins/group/calico/DNS 0.19
337 TestNetworkPlugins/group/calico/Localhost 0.16
338 TestNetworkPlugins/group/calico/HairPin 0.16
339 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
340 TestNetworkPlugins/group/kindnet/NetCatPod 10.28
341 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
342 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.26
343 TestNetworkPlugins/group/kindnet/DNS 0.22
344 TestNetworkPlugins/group/kindnet/Localhost 0.2
345 TestNetworkPlugins/group/kindnet/HairPin 0.17
347 TestStartStop/group/old-k8s-version/serial/FirstStart 170.82
348 TestNetworkPlugins/group/custom-flannel/DNS 0.25
349 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
350 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
352 TestStartStop/group/no-preload/serial/FirstStart 93.43
354 TestStartStop/group/embed-certs/serial/FirstStart 102.03
355 TestNetworkPlugins/group/false/KubeletFlags 0.27
356 TestNetworkPlugins/group/false/NetCatPod 11.32
357 TestNetworkPlugins/group/false/DNS 0.21
358 TestNetworkPlugins/group/false/Localhost 0.17
359 TestNetworkPlugins/group/false/HairPin 0.16
361 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 101.62
362 TestStartStop/group/no-preload/serial/DeployApp 10.35
363 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
364 TestStartStop/group/no-preload/serial/Stop 13.38
365 TestStartStop/group/embed-certs/serial/DeployApp 9.33
366 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
367 TestStartStop/group/no-preload/serial/SecondStart 304.09
368 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.41
369 TestStartStop/group/embed-certs/serial/Stop 13.39
370 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
371 TestStartStop/group/embed-certs/serial/SecondStart 316.59
372 TestStartStop/group/old-k8s-version/serial/DeployApp 9.62
373 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.35
374 TestStartStop/group/old-k8s-version/serial/Stop 13.42
375 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
376 TestStartStop/group/old-k8s-version/serial/SecondStart 456.73
377 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.31
378 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1
379 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.35
380 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
381 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 300.8
382 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
384 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
385 TestStartStop/group/no-preload/serial/Pause 2.7
387 TestStartStop/group/newest-cni/serial/FirstStart 64.89
388 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.01
389 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
390 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
391 TestStartStop/group/embed-certs/serial/Pause 2.66
392 TestStartStop/group/newest-cni/serial/DeployApp 0
393 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.99
394 TestStartStop/group/newest-cni/serial/Stop 13.34
395 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
396 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
397 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
398 TestStartStop/group/newest-cni/serial/SecondStart 36.21
399 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
400 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.46
401 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
402 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
403 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
404 TestStartStop/group/newest-cni/serial/Pause 2.38
405 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
406 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
407 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
408 TestStartStop/group/old-k8s-version/serial/Pause 2.34
TestDownloadOnly/v1.20.0/json-events (7.29s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-601425 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-601425 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (7.29319447s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.29s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-601425
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-601425: exit status 85 (57.609649ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-601425 | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC |          |
	|         | -p download-only-601425        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:20:34
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:20:34.426535   11062 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:20:34.426637   11062 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:20:34.426645   11062 out.go:358] Setting ErrFile to fd 2...
	I0913 18:20:34.426649   11062 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:20:34.426855   11062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3886/.minikube/bin
	W0913 18:20:34.426983   11062 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19636-3886/.minikube/config/config.json: open /home/jenkins/minikube-integration/19636-3886/.minikube/config/config.json: no such file or directory
	I0913 18:20:34.427543   11062 out.go:352] Setting JSON to true
	I0913 18:20:34.428424   11062 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":181,"bootTime":1726251453,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:20:34.428520   11062 start.go:139] virtualization: kvm guest
	I0913 18:20:34.431012   11062 out.go:97] [download-only-601425] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0913 18:20:34.431155   11062 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19636-3886/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 18:20:34.431165   11062 notify.go:220] Checking for updates...
	I0913 18:20:34.432788   11062 out.go:169] MINIKUBE_LOCATION=19636
	I0913 18:20:34.434409   11062 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:20:34.435765   11062 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19636-3886/kubeconfig
	I0913 18:20:34.437184   11062 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3886/.minikube
	I0913 18:20:34.438507   11062 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0913 18:20:34.440775   11062 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0913 18:20:34.441060   11062 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:20:34.541121   11062 out.go:97] Using the kvm2 driver based on user configuration
	I0913 18:20:34.541156   11062 start.go:297] selected driver: kvm2
	I0913 18:20:34.541162   11062 start.go:901] validating driver "kvm2" against <nil>
	I0913 18:20:34.541524   11062 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:20:34.541674   11062 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19636-3886/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0913 18:20:34.557379   11062 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0913 18:20:34.557423   11062 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 18:20:34.557964   11062 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0913 18:20:34.558127   11062 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 18:20:34.558156   11062 cni.go:84] Creating CNI manager for ""
	I0913 18:20:34.558201   11062 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0913 18:20:34.558250   11062 start.go:340] cluster config:
	{Name:download-only-601425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-601425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:20:34.558449   11062 iso.go:125] acquiring lock: {Name:mk12ab92f890170906f67f3ca706a4ea8b0bad2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:20:34.560689   11062 out.go:97] Downloading VM boot image ...
	I0913 18:20:34.560733   11062 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19636-3886/.minikube/cache/iso/amd64/minikube-v1.34.0-1726156389-19616-amd64.iso
	I0913 18:20:37.612444   11062 out.go:97] Starting "download-only-601425" primary control-plane node in "download-only-601425" cluster
	I0913 18:20:37.612481   11062 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 18:20:37.638612   11062 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0913 18:20:37.638648   11062 cache.go:56] Caching tarball of preloaded images
	I0913 18:20:37.638809   11062 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 18:20:37.640712   11062 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0913 18:20:37.640739   11062 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0913 18:20:37.669596   11062 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19636-3886/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-601425 host does not exist
	  To start a cluster, run: "minikube start -p download-only-601425"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-601425
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (5.03s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-504525 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-504525 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=kvm2 : (5.034334552s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.03s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-504525
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-504525: exit status 85 (55.976441ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-601425 | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC |                     |
	|         | -p download-only-601425        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| delete  | -p download-only-601425        | download-only-601425 | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| start   | -o=json --download-only        | download-only-504525 | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC |                     |
	|         | -p download-only-504525        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:20:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:20:42.037778   11270 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:20:42.038058   11270 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:20:42.038068   11270 out.go:358] Setting ErrFile to fd 2...
	I0913 18:20:42.038075   11270 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:20:42.038254   11270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3886/.minikube/bin
	I0913 18:20:42.038868   11270 out.go:352] Setting JSON to true
	I0913 18:20:42.039703   11270 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":189,"bootTime":1726251453,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:20:42.039797   11270 start.go:139] virtualization: kvm guest
	I0913 18:20:42.041867   11270 out.go:97] [download-only-504525] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 18:20:42.041980   11270 notify.go:220] Checking for updates...
	I0913 18:20:42.043349   11270 out.go:169] MINIKUBE_LOCATION=19636
	I0913 18:20:42.044644   11270 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:20:42.045787   11270 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19636-3886/kubeconfig
	I0913 18:20:42.047006   11270 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3886/.minikube
	I0913 18:20:42.048135   11270 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-504525 host does not exist
	  To start a cluster, run: "minikube start -p download-only-504525"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-504525
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-322824 --alsologtostderr --binary-mirror http://127.0.0.1:34697 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-322824" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-322824
--- PASS: TestBinaryMirror (0.58s)

TestOffline (88.59s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-701065 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-701065 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m27.575545365s)
helpers_test.go:175: Cleaning up "offline-docker-701065" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-701065
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-701065: (1.011764781s)
--- PASS: TestOffline (88.59s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-084503
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-084503: exit status 85 (48.086125ms)

-- stdout --
	* Profile "addons-084503" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-084503"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-084503
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-084503: exit status 85 (50.352075ms)

-- stdout --
	* Profile "addons-084503" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-084503"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (215.88s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-084503 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-084503 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns: (3m35.878096791s)
--- PASS: TestAddons/Setup (215.88s)

TestAddons/serial/Volcano (41.96s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 20.185269ms
addons_test.go:843: volcano-admission stabilized in 20.215399ms
addons_test.go:835: volcano-scheduler stabilized in 20.348816ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-gt6b2" [90a3f10f-1847-4316-aa20-69e0303f0ec1] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.013019679s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-cxgd4" [6f875188-ed31-41cf-8aa2-eae2f69d8e82] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.003597506s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-dt4l5" [08b23aa8-5792-4728-96af-53daafc4ec3f] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003551843s
addons_test.go:870: (dbg) Run:  kubectl --context addons-084503 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-084503 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-084503 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [4d8ac1a3-2fec-465b-b459-5d77f544636a] Pending
helpers_test.go:344: "test-job-nginx-0" [4d8ac1a3-2fec-465b-b459-5d77f544636a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [4d8ac1a3-2fec-465b-b459-5d77f544636a] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.003977999s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p addons-084503 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p addons-084503 addons disable volcano --alsologtostderr -v=1: (10.532323149s)
--- PASS: TestAddons/serial/Volcano (41.96s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-084503 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-084503 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/Ingress (19.28s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-084503 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-084503 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-084503 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3c773b24-ef9e-4150-b150-91bfec088b79] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3c773b24-ef9e-4150-b150-91bfec088b79] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004194823s
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-084503 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-084503 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-084503 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.39.228
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-084503 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-084503 addons disable ingress-dns --alsologtostderr -v=1: (1.384272474s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-084503 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-084503 addons disable ingress --alsologtostderr -v=1: (7.705230394s)
--- PASS: TestAddons/parallel/Ingress (19.28s)

TestAddons/parallel/InspektorGadget (11.7s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-wx4lz" [46e17108-57da-4940-a7d9-ee7f279a3d2b] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004262275s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-084503
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-084503: (5.691665927s)
--- PASS: TestAddons/parallel/InspektorGadget (11.70s)

TestAddons/parallel/MetricsServer (6.9s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 4.302044ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-g9d9c" [b58dd703-5930-4049-89a0-e44fc582da9a] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0032782s
addons_test.go:413: (dbg) Run:  kubectl --context addons-084503 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-084503 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.90s)

TestAddons/parallel/CSI (47.93s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:505: csi-hostpath-driver pods stabilized in 28.147764ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-084503 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-084503 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4df9a6ff-f5ff-466d-ade4-b460d8fe4ddf] Pending
helpers_test.go:344: "task-pv-pod" [4df9a6ff-f5ff-466d-ade4-b460d8fe4ddf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4df9a6ff-f5ff-466d-ade4-b460d8fe4ddf] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.010133246s
addons_test.go:528: (dbg) Run:  kubectl --context addons-084503 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-084503 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-084503 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-084503 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-084503 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-084503 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-084503 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [21cd7387-7ea0-4931-b974-4f7c79ac2f20] Pending
helpers_test.go:344: "task-pv-pod-restore" [21cd7387-7ea0-4931-b974-4f7c79ac2f20] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [21cd7387-7ea0-4931-b974-4f7c79ac2f20] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004199912s
addons_test.go:570: (dbg) Run:  kubectl --context addons-084503 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-084503 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-084503 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-084503 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-084503 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.731882135s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-084503 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (47.93s)

TestAddons/parallel/Headlamp (18.91s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-084503 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-084503 --alsologtostderr -v=1: (1.083462159s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-kpzb9" [330ad1c1-3c8a-416f-8836-2e88136708f8] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-kpzb9" [330ad1c1-3c8a-416f-8836-2e88136708f8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-kpzb9" [330ad1c1-3c8a-416f-8836-2e88136708f8] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.00442038s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-084503 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-084503 addons disable headlamp --alsologtostderr -v=1: (5.820230082s)
--- PASS: TestAddons/parallel/Headlamp (18.91s)

TestAddons/parallel/CloudSpanner (6.48s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-q7r85" [d0e39e39-21b3-4aee-bf80-9c1650859dbe] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004470171s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-084503
--- PASS: TestAddons/parallel/CloudSpanner (6.48s)

TestAddons/parallel/LocalPath (54.31s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-084503 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-084503 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-084503 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [db2526e2-2a29-4db2-915a-72bc10aa33d8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [db2526e2-2a29-4db2-915a-72bc10aa33d8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [db2526e2-2a29-4db2-915a-72bc10aa33d8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004517278s
addons_test.go:938: (dbg) Run:  kubectl --context addons-084503 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-084503 ssh "cat /opt/local-path-provisioner/pvc-a311ee20-76c9-43bb-aa4f-017e3c6d3a8c_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-084503 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-084503 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-084503 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-amd64 -p addons-084503 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.500378805s)
--- PASS: TestAddons/parallel/LocalPath (54.31s)

TestAddons/parallel/NvidiaDevicePlugin (5.47s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-628zk" [c1f45629-5794-47b5-ad6f-1ba8de29946d] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.006981036s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-084503
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.47s)

TestAddons/parallel/Yakd (11.92s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-jr4dh" [da57f275-5dbd-49bf-afa5-f890b4f1bd1b] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003836008s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-084503 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-084503 addons disable yakd --alsologtostderr -v=1: (5.912988921s)
--- PASS: TestAddons/parallel/Yakd (11.92s)

TestAddons/StoppedEnableDisable (8.56s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-084503
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-084503: (8.29110458s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-084503
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-084503
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-084503
--- PASS: TestAddons/StoppedEnableDisable (8.56s)

TestCertOptions (64.33s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-188900 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-188900 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m2.952920932s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-188900 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-188900 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-188900 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-188900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-188900
E0913 19:26:07.744127   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/skaffold-714540/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestCertOptions (64.33s)

TestCertExpiration (285.04s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-783911 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-783911 --memory=2048 --cert-expiration=3m --driver=kvm2 : (52.573098571s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-783911 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-783911 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (51.405000842s)
helpers_test.go:175: Cleaning up "cert-expiration-783911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-783911
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-783911: (1.058543506s)
--- PASS: TestCertExpiration (285.04s)

TestDockerFlags (74s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-251854 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-251854 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m12.472093757s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-251854 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-251854 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-251854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-251854
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-251854: (1.057178007s)
--- PASS: TestDockerFlags (74.00s)

TestForceSystemdFlag (55.28s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-545973 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-545973 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (54.146314855s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-545973 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-545973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-545973
--- PASS: TestForceSystemdFlag (55.28s)

TestForceSystemdEnv (61.45s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-768487 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-768487 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (59.254207812s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-768487 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-768487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-768487
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-768487: (1.774558677s)
--- PASS: TestForceSystemdEnv (61.45s)

TestKVMDriverInstallOrUpdate (6.02s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (6.02s)

TestErrorSpam/setup (49.72s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-699351 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-699351 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-699351 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-699351 --driver=kvm2 : (49.719953356s)
--- PASS: TestErrorSpam/setup (49.72s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-699351 --log_dir /tmp/nospam-699351 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-699351 --log_dir /tmp/nospam-699351 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-699351 --log_dir /tmp/nospam-699351 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.75s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-699351 --log_dir /tmp/nospam-699351 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-699351 --log_dir /tmp/nospam-699351 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-699351 --log_dir /tmp/nospam-699351 status
--- PASS: TestErrorSpam/status (0.75s)

TestErrorSpam/pause (1.19s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-699351 --log_dir /tmp/nospam-699351 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-699351 --log_dir /tmp/nospam-699351 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-699351 --log_dir /tmp/nospam-699351 pause
--- PASS: TestErrorSpam/pause (1.19s)

TestErrorSpam/unpause (1.4s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-699351 --log_dir /tmp/nospam-699351 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-699351 --log_dir /tmp/nospam-699351 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-699351 --log_dir /tmp/nospam-699351 unpause
--- PASS: TestErrorSpam/unpause (1.40s)

TestErrorSpam/stop (15s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-699351 --log_dir /tmp/nospam-699351 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-699351 --log_dir /tmp/nospam-699351 stop: (12.476024814s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-699351 --log_dir /tmp/nospam-699351 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-699351 --log_dir /tmp/nospam-699351 stop: (1.332504678s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-699351 --log_dir /tmp/nospam-699351 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-699351 --log_dir /tmp/nospam-699351 stop: (1.194486542s)
--- PASS: TestErrorSpam/stop (15.00s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19636-3886/.minikube/files/etc/test/nested/copy/11050/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (95.5s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-988520 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-988520 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m35.501082283s)
--- PASS: TestFunctional/serial/StartWithProxy (95.50s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.09s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-988520 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-988520 --alsologtostderr -v=8: (40.090269394s)
functional_test.go:663: soft start took 40.091011785s for "functional-988520" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.09s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.15s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-988520 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.15s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.43s)

TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-988520 /tmp/TestFunctionalserialCacheCmdcacheadd_local2535205144/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 cache add minikube-local-cache-test:functional-988520
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 cache delete minikube-local-cache-test:functional-988520
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-988520
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-988520 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (217.494279ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.15s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 kubectl -- --context functional-988520 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-988520 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (40.23s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-988520 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-988520 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.231725482s)
functional_test.go:761: restart took 40.231844904s for "functional-988520" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.23s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-988520 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.93s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 logs
--- PASS: TestFunctional/serial/LogsCmd (0.93s)

TestFunctional/serial/LogsFileCmd (0.98s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 logs --file /tmp/TestFunctionalserialLogsFileCmd3484414667/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.98s)

TestFunctional/serial/InvalidService (3.86s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-988520 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-988520
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-988520: exit status 115 (271.742255ms)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.101:32250 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-988520 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.86s)

TestFunctional/parallel/ConfigCmd (0.34s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-988520 config get cpus: exit status 14 (56.44704ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-988520 config get cpus: exit status 14 (51.881414ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)

TestFunctional/parallel/DashboardCmd (19.59s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-988520 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-988520 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21167: os: process already finished
E0913 18:39:29.235997   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/DashboardCmd (19.59s)

TestFunctional/parallel/DryRun (0.39s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-988520 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-988520 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (256.909829ms)
-- stdout --
	* [functional-988520] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-3886/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3886/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0913 18:39:23.735852   21703 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:39:23.735962   21703 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:39:23.735972   21703 out.go:358] Setting ErrFile to fd 2...
	I0913 18:39:23.735976   21703 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:39:23.736122   21703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3886/.minikube/bin
	I0913 18:39:23.736612   21703 out.go:352] Setting JSON to false
	I0913 18:39:23.737493   21703 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1311,"bootTime":1726251453,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:39:23.737593   21703 start.go:139] virtualization: kvm guest
	I0913 18:39:23.770306   21703 out.go:177] * [functional-988520] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 18:39:23.771788   21703 notify.go:220] Checking for updates...
	I0913 18:39:23.771807   21703 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:39:23.783352   21703 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:39:23.866703   21703 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3886/kubeconfig
	I0913 18:39:23.868101   21703 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3886/.minikube
	I0913 18:39:23.869256   21703 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 18:39:23.871080   21703 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:39:23.872724   21703 config.go:182] Loaded profile config "functional-988520": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 18:39:23.873384   21703 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:39:23.873433   21703 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:39:23.890102   21703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32819
	I0913 18:39:23.890631   21703 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:39:23.891187   21703 main.go:141] libmachine: Using API Version  1
	I0913 18:39:23.891213   21703 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:39:23.891677   21703 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:39:23.891854   21703 main.go:141] libmachine: (functional-988520) Calling .DriverName
	I0913 18:39:23.892091   21703 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:39:23.892428   21703 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:39:23.892471   21703 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:39:23.908170   21703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35521
	I0913 18:39:23.908665   21703 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:39:23.909315   21703 main.go:141] libmachine: Using API Version  1
	I0913 18:39:23.909342   21703 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:39:23.909687   21703 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:39:23.909881   21703 main.go:141] libmachine: (functional-988520) Calling .DriverName
	I0913 18:39:23.943426   21703 out.go:177] * Using the kvm2 driver based on existing profile
	I0913 18:39:23.944626   21703 start.go:297] selected driver: kvm2
	I0913 18:39:23.944648   21703 start.go:901] validating driver "kvm2" against &{Name:functional-988520 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-988520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.101 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpir
ation:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:39:23.944799   21703 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:39:23.946981   21703 out.go:201] 
	W0913 18:39:23.948094   21703 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0913 18:39:23.949294   21703 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-988520 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.39s)

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-988520 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-988520 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (147.717123ms)
-- stdout --
	* [functional-988520] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-3886/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3886/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0913 18:38:57.230261   20437 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:38:57.230566   20437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:38:57.230577   20437 out.go:358] Setting ErrFile to fd 2...
	I0913 18:38:57.230582   20437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:38:57.230932   20437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3886/.minikube/bin
	I0913 18:38:57.231596   20437 out.go:352] Setting JSON to false
	I0913 18:38:57.232617   20437 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1284,"bootTime":1726251453,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:38:57.232710   20437 start.go:139] virtualization: kvm guest
	I0913 18:38:57.235057   20437 out.go:177] * [functional-988520] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0913 18:38:57.236445   20437 notify.go:220] Checking for updates...
	I0913 18:38:57.236475   20437 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:38:57.237804   20437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:38:57.239199   20437 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3886/kubeconfig
	I0913 18:38:57.240421   20437 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3886/.minikube
	I0913 18:38:57.241925   20437 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 18:38:57.243199   20437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:38:57.245068   20437 config.go:182] Loaded profile config "functional-988520": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 18:38:57.245713   20437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:38:57.245784   20437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:38:57.261060   20437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44797
	I0913 18:38:57.261576   20437 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:38:57.262123   20437 main.go:141] libmachine: Using API Version  1
	I0913 18:38:57.262150   20437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:38:57.262508   20437 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:38:57.262683   20437 main.go:141] libmachine: (functional-988520) Calling .DriverName
	I0913 18:38:57.262944   20437 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:38:57.263383   20437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:38:57.263422   20437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:38:57.279564   20437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I0913 18:38:57.280092   20437 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:38:57.280701   20437 main.go:141] libmachine: Using API Version  1
	I0913 18:38:57.280729   20437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:38:57.281039   20437 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:38:57.281232   20437 main.go:141] libmachine: (functional-988520) Calling .DriverName
	I0913 18:38:57.320209   20437 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0913 18:38:57.321533   20437 start.go:297] selected driver: kvm2
	I0913 18:38:57.321552   20437 start.go:901] validating driver "kvm2" against &{Name:functional-988520 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-988520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.101 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:38:57.321700   20437 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:38:57.323815   20437 out.go:201] 
	W0913 18:38:57.325346   20437 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0913 18:38:57.326563   20437 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (0.84s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.84s)

TestFunctional/parallel/ServiceCmdConnect (8.81s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-988520 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-988520 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-cpmlv" [995c872f-2131-4c33-a099-52f8c948dba2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-cpmlv" [995c872f-2131-4c33-a099-52f8c948dba2] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.161853695s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.101:30716
functional_test.go:1675: http://192.168.39.101:30716: success! body:
Hostname: hello-node-connect-67bdd5bbb4-cpmlv
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.101:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.39.101:30716
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.81s)

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (44.98s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5fb697bd-db60-47e0-b68a-970d0845feb5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00578129s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-988520 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-988520 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-988520 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-988520 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-988520 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [55799e39-721e-42c2-9e47-79f04eab4cc3] Pending
helpers_test.go:344: "sp-pod" [55799e39-721e-42c2-9e47-79f04eab4cc3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [55799e39-721e-42c2-9e47-79f04eab4cc3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.005213356s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-988520 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-988520 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-988520 delete -f testdata/storage-provisioner/pod.yaml: (1.776259514s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-988520 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0e76a293-63e8-4a77-ba96-34a5c33f9157] Pending
E0913 18:39:25.392400   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [0e76a293-63e8-4a77-ba96-34a5c33f9157] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0913 18:39:26.674630   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [0e76a293-63e8-4a77-ba96-34a5c33f9157] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003805371s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-988520 exec sp-pod -- ls /tmp/mount
E0913 18:39:34.358263   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.98s)

TestFunctional/parallel/SSHCmd (0.45s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

TestFunctional/parallel/CpCmd (1.4s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh -n functional-988520 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 cp functional-988520:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd41541177/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh -n functional-988520 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh -n functional-988520 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.40s)

TestFunctional/parallel/MySQL (35.02s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-988520 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-4rwqw" [643e4111-d8d7-4d5b-a020-6da7cc560722] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-4rwqw" [643e4111-d8d7-4d5b-a020-6da7cc560722] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.187460886s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-988520 exec mysql-6cdb49bbb-4rwqw -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-988520 exec mysql-6cdb49bbb-4rwqw -- mysql -ppassword -e "show databases;": exit status 1 (213.511561ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-988520 exec mysql-6cdb49bbb-4rwqw -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-988520 exec mysql-6cdb49bbb-4rwqw -- mysql -ppassword -e "show databases;": exit status 1 (239.886891ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-988520 exec mysql-6cdb49bbb-4rwqw -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-988520 exec mysql-6cdb49bbb-4rwqw -- mysql -ppassword -e "show databases;": exit status 1 (202.880444ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-988520 exec mysql-6cdb49bbb-4rwqw -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-988520 exec mysql-6cdb49bbb-4rwqw -- mysql -ppassword -e "show databases;": exit status 1 (228.27681ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-988520 exec mysql-6cdb49bbb-4rwqw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (35.02s)

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/11050/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh "sudo cat /etc/test/nested/copy/11050/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.44s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/11050.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh "sudo cat /etc/ssl/certs/11050.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/11050.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh "sudo cat /usr/share/ca-certificates/11050.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/110502.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh "sudo cat /etc/ssl/certs/110502.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/110502.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh "sudo cat /usr/share/ca-certificates/110502.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.44s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-988520 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.2s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-988520 ssh "sudo systemctl is-active crio": exit status 1 (202.708424ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.20s)

TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.23s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-988520 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-988520 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-ktm54" [8cefacf2-52ff-461b-a45c-ebf5cc004bef] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-ktm54" [8cefacf2-52ff-461b-a45c-ebf5cc004bef] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.371602673s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.57s)

TestFunctional/parallel/DockerEnv/bash (0.92s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-988520 docker-env) && out/minikube-linux-amd64 status -p functional-988520"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-988520 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.92s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 update-context --alsologtostderr -v=2
E0913 18:39:24.184293   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 update-context --alsologtostderr -v=2
E0913 18:39:24.102677   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:24.109668   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:24.121034   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:24.142706   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 image ls --format short --alsologtostderr
E0913 18:39:24.266544   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-988520 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-988520
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:functional-988520
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-988520 image ls --format short --alsologtostderr:
I0913 18:39:24.246054   21822 out.go:345] Setting OutFile to fd 1 ...
I0913 18:39:24.246332   21822 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:39:24.246358   21822 out.go:358] Setting ErrFile to fd 2...
I0913 18:39:24.246365   21822 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:39:24.246635   21822 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3886/.minikube/bin
I0913 18:39:24.247509   21822 config.go:182] Loaded profile config "functional-988520": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:39:24.247653   21822 config.go:182] Loaded profile config "functional-988520": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:39:24.248192   21822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0913 18:39:24.248240   21822 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 18:39:24.263112   21822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40949
I0913 18:39:24.264012   21822 main.go:141] libmachine: () Calling .GetVersion
I0913 18:39:24.264674   21822 main.go:141] libmachine: Using API Version  1
I0913 18:39:24.264700   21822 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 18:39:24.265062   21822 main.go:141] libmachine: () Calling .GetMachineName
I0913 18:39:24.265254   21822 main.go:141] libmachine: (functional-988520) Calling .GetState
I0913 18:39:24.267443   21822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0913 18:39:24.267489   21822 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 18:39:24.283849   21822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
I0913 18:39:24.284443   21822 main.go:141] libmachine: () Calling .GetVersion
I0913 18:39:24.285158   21822 main.go:141] libmachine: Using API Version  1
I0913 18:39:24.285196   21822 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 18:39:24.285525   21822 main.go:141] libmachine: () Calling .GetMachineName
I0913 18:39:24.285714   21822 main.go:141] libmachine: (functional-988520) Calling .DriverName
I0913 18:39:24.285927   21822 ssh_runner.go:195] Run: systemctl --version
I0913 18:39:24.285969   21822 main.go:141] libmachine: (functional-988520) Calling .GetSSHHostname
I0913 18:39:24.289297   21822 main.go:141] libmachine: (functional-988520) DBG | domain functional-988520 has defined MAC address 52:54:00:47:f6:7c in network mk-functional-988520
I0913 18:39:24.289691   21822 main.go:141] libmachine: (functional-988520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f6:7c", ip: ""} in network mk-functional-988520: {Iface:virbr1 ExpiryTime:2024-09-13 19:35:54 +0000 UTC Type:0 Mac:52:54:00:47:f6:7c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:functional-988520 Clientid:01:52:54:00:47:f6:7c}
I0913 18:39:24.289722   21822 main.go:141] libmachine: (functional-988520) DBG | domain functional-988520 has defined IP address 192.168.39.101 and MAC address 52:54:00:47:f6:7c in network mk-functional-988520
I0913 18:39:24.289983   21822 main.go:141] libmachine: (functional-988520) Calling .GetSSHPort
I0913 18:39:24.290181   21822 main.go:141] libmachine: (functional-988520) Calling .GetSSHKeyPath
I0913 18:39:24.290340   21822 main.go:141] libmachine: (functional-988520) Calling .GetSSHUsername
I0913 18:39:24.290484   21822 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/functional-988520/id_rsa Username:docker}
I0913 18:39:24.381367   21822 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0913 18:39:24.441236   21822 main.go:141] libmachine: Making call to close driver server
I0913 18:39:24.441252   21822 main.go:141] libmachine: (functional-988520) Calling .Close
I0913 18:39:24.441529   21822 main.go:141] libmachine: (functional-988520) DBG | Closing plugin on server side
I0913 18:39:24.441568   21822 main.go:141] libmachine: Successfully made call to close driver server
I0913 18:39:24.441585   21822 main.go:141] libmachine: Making call to close connection to plugin binary
I0913 18:39:24.441594   21822 main.go:141] libmachine: Making call to close driver server
I0913 18:39:24.441602   21822 main.go:141] libmachine: (functional-988520) Calling .Close
I0913 18:39:24.441853   21822 main.go:141] libmachine: Successfully made call to close driver server
I0913 18:39:24.441858   21822 main.go:141] libmachine: (functional-988520) DBG | Closing plugin on server side
I0913 18:39:24.441889   21822 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
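The `image ls` commands above shell into the VM and run `docker images --no-trunc --format "{{json .}}"`, which emits one JSON object per line, then decode that output into the short/table/json listings. A minimal sketch of the decoding step — `dockerImage` and `parseImages` are illustrative names, not minikube's actual code:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// dockerImage holds a few of the fields emitted per line by
// `docker images --no-trunc --format "{{json .}}"`; the JSON keys
// match Docker's own field names.
type dockerImage struct {
	Repository string `json:"Repository"`
	Tag        string `json:"Tag"`
	ID         string `json:"ID"`
}

// parseImages decodes one-JSON-object-per-line output into a slice,
// skipping blank lines.
func parseImages(out string) ([]dockerImage, error) {
	var imgs []dockerImage
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		var img dockerImage
		if err := json.Unmarshal([]byte(line), &img); err != nil {
			return nil, err
		}
		imgs = append(imgs, img)
	}
	return imgs, sc.Err()
}

func main() {
	sample := `{"Repository":"registry.k8s.io/pause","Tag":"3.10","ID":"sha256:873ed751"}
{"Repository":"docker.io/library/nginx","Tag":"latest","ID":"sha256:39286ab8"}`
	imgs, err := parseImages(sample)
	fmt.Println(err, len(imgs), imgs[0].Repository+":"+imgs[0].Tag)
}
```

The same decoded slice can then be rendered as the short list, table, or raw JSON shown in the three ImageCommands subtests.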

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 image ls --format table --alsologtostderr
E0913 18:39:24.750208   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-988520 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-988520 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-988520 | ba94a21b06a10 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-988520 image ls --format table --alsologtostderr:
I0913 18:39:24.797497   21955 out.go:345] Setting OutFile to fd 1 ...
I0913 18:39:24.797652   21955 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:39:24.797664   21955 out.go:358] Setting ErrFile to fd 2...
I0913 18:39:24.797677   21955 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:39:24.797982   21955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3886/.minikube/bin
I0913 18:39:24.798910   21955 config.go:182] Loaded profile config "functional-988520": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:39:24.799061   21955 config.go:182] Loaded profile config "functional-988520": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:39:24.799638   21955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0913 18:39:24.799698   21955 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 18:39:24.814970   21955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34897
I0913 18:39:24.815520   21955 main.go:141] libmachine: () Calling .GetVersion
I0913 18:39:24.816217   21955 main.go:141] libmachine: Using API Version  1
I0913 18:39:24.816241   21955 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 18:39:24.816550   21955 main.go:141] libmachine: () Calling .GetMachineName
I0913 18:39:24.816743   21955 main.go:141] libmachine: (functional-988520) Calling .GetState
I0913 18:39:24.818749   21955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0913 18:39:24.818790   21955 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 18:39:24.834027   21955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32887
I0913 18:39:24.834712   21955 main.go:141] libmachine: () Calling .GetVersion
I0913 18:39:24.835305   21955 main.go:141] libmachine: Using API Version  1
I0913 18:39:24.835338   21955 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 18:39:24.835682   21955 main.go:141] libmachine: () Calling .GetMachineName
I0913 18:39:24.835841   21955 main.go:141] libmachine: (functional-988520) Calling .DriverName
I0913 18:39:24.836026   21955 ssh_runner.go:195] Run: systemctl --version
I0913 18:39:24.836062   21955 main.go:141] libmachine: (functional-988520) Calling .GetSSHHostname
I0913 18:39:24.838967   21955 main.go:141] libmachine: (functional-988520) DBG | domain functional-988520 has defined MAC address 52:54:00:47:f6:7c in network mk-functional-988520
I0913 18:39:24.839403   21955 main.go:141] libmachine: (functional-988520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f6:7c", ip: ""} in network mk-functional-988520: {Iface:virbr1 ExpiryTime:2024-09-13 19:35:54 +0000 UTC Type:0 Mac:52:54:00:47:f6:7c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:functional-988520 Clientid:01:52:54:00:47:f6:7c}
I0913 18:39:24.839434   21955 main.go:141] libmachine: (functional-988520) DBG | domain functional-988520 has defined IP address 192.168.39.101 and MAC address 52:54:00:47:f6:7c in network mk-functional-988520
I0913 18:39:24.839582   21955 main.go:141] libmachine: (functional-988520) Calling .GetSSHPort
I0913 18:39:24.839766   21955 main.go:141] libmachine: (functional-988520) Calling .GetSSHKeyPath
I0913 18:39:24.839929   21955 main.go:141] libmachine: (functional-988520) Calling .GetSSHUsername
I0913 18:39:24.840072   21955 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/functional-988520/id_rsa Username:docker}
I0913 18:39:24.932910   21955 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0913 18:39:24.979896   21955 main.go:141] libmachine: Making call to close driver server
I0913 18:39:24.979914   21955 main.go:141] libmachine: (functional-988520) Calling .Close
I0913 18:39:24.980162   21955 main.go:141] libmachine: Successfully made call to close driver server
I0913 18:39:24.980206   21955 main.go:141] libmachine: Making call to close connection to plugin binary
I0913 18:39:24.980190   21955 main.go:141] libmachine: (functional-988520) DBG | Closing plugin on server side
I0913 18:39:24.980230   21955 main.go:141] libmachine: Making call to close driver server
I0913 18:39:24.980239   21955 main.go:141] libmachine: (functional-988520) Calling .Close
I0913 18:39:24.980446   21955 main.go:141] libmachine: (functional-988520) DBG | Closing plugin on server side
I0913 18:39:24.980462   21955 main.go:141] libmachine: Successfully made call to close driver server
I0913 18:39:24.980500   21955 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-988520 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"ba94a21b06a1009a7cd1f941c5f747133023c435b6f325f831d15b8a2f56d47e","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-988520"],"size":"30"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-988520"],"size":"4940000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-988520 image ls --format json --alsologtostderr:
I0913 18:39:24.560888   21903 out.go:345] Setting OutFile to fd 1 ...
I0913 18:39:24.561021   21903 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:39:24.561033   21903 out.go:358] Setting ErrFile to fd 2...
I0913 18:39:24.561038   21903 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:39:24.561327   21903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3886/.minikube/bin
I0913 18:39:24.562220   21903 config.go:182] Loaded profile config "functional-988520": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:39:24.562437   21903 config.go:182] Loaded profile config "functional-988520": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:39:24.563003   21903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0913 18:39:24.563052   21903 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 18:39:24.578126   21903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45589
I0913 18:39:24.578589   21903 main.go:141] libmachine: () Calling .GetVersion
I0913 18:39:24.579122   21903 main.go:141] libmachine: Using API Version  1
I0913 18:39:24.579147   21903 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 18:39:24.579469   21903 main.go:141] libmachine: () Calling .GetMachineName
I0913 18:39:24.579654   21903 main.go:141] libmachine: (functional-988520) Calling .GetState
I0913 18:39:24.581624   21903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0913 18:39:24.581673   21903 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 18:39:24.597212   21903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
I0913 18:39:24.597656   21903 main.go:141] libmachine: () Calling .GetVersion
I0913 18:39:24.598218   21903 main.go:141] libmachine: Using API Version  1
I0913 18:39:24.598241   21903 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 18:39:24.598615   21903 main.go:141] libmachine: () Calling .GetMachineName
I0913 18:39:24.598788   21903 main.go:141] libmachine: (functional-988520) Calling .DriverName
I0913 18:39:24.599038   21903 ssh_runner.go:195] Run: systemctl --version
I0913 18:39:24.599068   21903 main.go:141] libmachine: (functional-988520) Calling .GetSSHHostname
I0913 18:39:24.602069   21903 main.go:141] libmachine: (functional-988520) DBG | domain functional-988520 has defined MAC address 52:54:00:47:f6:7c in network mk-functional-988520
I0913 18:39:24.602499   21903 main.go:141] libmachine: (functional-988520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f6:7c", ip: ""} in network mk-functional-988520: {Iface:virbr1 ExpiryTime:2024-09-13 19:35:54 +0000 UTC Type:0 Mac:52:54:00:47:f6:7c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:functional-988520 Clientid:01:52:54:00:47:f6:7c}
I0913 18:39:24.602528   21903 main.go:141] libmachine: (functional-988520) DBG | domain functional-988520 has defined IP address 192.168.39.101 and MAC address 52:54:00:47:f6:7c in network mk-functional-988520
I0913 18:39:24.602663   21903 main.go:141] libmachine: (functional-988520) Calling .GetSSHPort
I0913 18:39:24.602818   21903 main.go:141] libmachine: (functional-988520) Calling .GetSSHKeyPath
I0913 18:39:24.602943   21903 main.go:141] libmachine: (functional-988520) Calling .GetSSHUsername
I0913 18:39:24.603051   21903 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/functional-988520/id_rsa Username:docker}
I0913 18:39:24.697077   21903 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0913 18:39:24.737394   21903 main.go:141] libmachine: Making call to close driver server
I0913 18:39:24.737412   21903 main.go:141] libmachine: (functional-988520) Calling .Close
I0913 18:39:24.737720   21903 main.go:141] libmachine: Successfully made call to close driver server
I0913 18:39:24.737753   21903 main.go:141] libmachine: Making call to close connection to plugin binary
I0913 18:39:24.737763   21903 main.go:141] libmachine: Making call to close driver server
I0913 18:39:24.737772   21903 main.go:141] libmachine: (functional-988520) Calling .Close
I0913 18:39:24.738000   21903 main.go:141] libmachine: (functional-988520) DBG | Closing plugin on server side
I0913 18:39:24.738045   21903 main.go:141] libmachine: Successfully made call to close driver server
I0913 18:39:24.738053   21903 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 image ls --format yaml --alsologtostderr
E0913 18:39:24.428764   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-988520 image ls --format yaml --alsologtostderr:
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: ba94a21b06a1009a7cd1f941c5f747133023c435b6f325f831d15b8a2f56d47e
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-988520
size: "30"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-988520
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-988520 image ls --format yaml --alsologtostderr:
I0913 18:39:24.319025   21857 out.go:345] Setting OutFile to fd 1 ...
I0913 18:39:24.319287   21857 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:39:24.319296   21857 out.go:358] Setting ErrFile to fd 2...
I0913 18:39:24.319300   21857 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:39:24.319500   21857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3886/.minikube/bin
I0913 18:39:24.320071   21857 config.go:182] Loaded profile config "functional-988520": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:39:24.320164   21857 config.go:182] Loaded profile config "functional-988520": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:39:24.320536   21857 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0913 18:39:24.320580   21857 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 18:39:24.336018   21857 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33063
I0913 18:39:24.336491   21857 main.go:141] libmachine: () Calling .GetVersion
I0913 18:39:24.337048   21857 main.go:141] libmachine: Using API Version  1
I0913 18:39:24.337068   21857 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 18:39:24.337367   21857 main.go:141] libmachine: () Calling .GetMachineName
I0913 18:39:24.337548   21857 main.go:141] libmachine: (functional-988520) Calling .GetState
I0913 18:39:24.339386   21857 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0913 18:39:24.339424   21857 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 18:39:24.354614   21857 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45227
I0913 18:39:24.355109   21857 main.go:141] libmachine: () Calling .GetVersion
I0913 18:39:24.355735   21857 main.go:141] libmachine: Using API Version  1
I0913 18:39:24.355773   21857 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 18:39:24.356142   21857 main.go:141] libmachine: () Calling .GetMachineName
I0913 18:39:24.356350   21857 main.go:141] libmachine: (functional-988520) Calling .DriverName
I0913 18:39:24.356544   21857 ssh_runner.go:195] Run: systemctl --version
I0913 18:39:24.356579   21857 main.go:141] libmachine: (functional-988520) Calling .GetSSHHostname
I0913 18:39:24.359541   21857 main.go:141] libmachine: (functional-988520) DBG | domain functional-988520 has defined MAC address 52:54:00:47:f6:7c in network mk-functional-988520
I0913 18:39:24.360013   21857 main.go:141] libmachine: (functional-988520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f6:7c", ip: ""} in network mk-functional-988520: {Iface:virbr1 ExpiryTime:2024-09-13 19:35:54 +0000 UTC Type:0 Mac:52:54:00:47:f6:7c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:functional-988520 Clientid:01:52:54:00:47:f6:7c}
I0913 18:39:24.360039   21857 main.go:141] libmachine: (functional-988520) DBG | domain functional-988520 has defined IP address 192.168.39.101 and MAC address 52:54:00:47:f6:7c in network mk-functional-988520
I0913 18:39:24.360206   21857 main.go:141] libmachine: (functional-988520) Calling .GetSSHPort
I0913 18:39:24.360387   21857 main.go:141] libmachine: (functional-988520) Calling .GetSSHKeyPath
I0913 18:39:24.360513   21857 main.go:141] libmachine: (functional-988520) Calling .GetSSHUsername
I0913 18:39:24.360636   21857 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/functional-988520/id_rsa Username:docker}
I0913 18:39:24.469564   21857 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0913 18:39:24.507073   21857 main.go:141] libmachine: Making call to close driver server
I0913 18:39:24.507090   21857 main.go:141] libmachine: (functional-988520) Calling .Close
I0913 18:39:24.507391   21857 main.go:141] libmachine: (functional-988520) DBG | Closing plugin on server side
I0913 18:39:24.507408   21857 main.go:141] libmachine: Successfully made call to close driver server
I0913 18:39:24.507423   21857 main.go:141] libmachine: Making call to close connection to plugin binary
I0913 18:39:24.507434   21857 main.go:141] libmachine: Making call to close driver server
I0913 18:39:24.507444   21857 main.go:141] libmachine: (functional-988520) Calling .Close
I0913 18:39:24.507637   21857 main.go:141] libmachine: (functional-988520) DBG | Closing plugin on server side
I0913 18:39:24.507690   21857 main.go:141] libmachine: Successfully made call to close driver server
I0913 18:39:24.507712   21857 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
TestFunctional/parallel/ImageCommands/ImageBuild (3.54s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-988520 ssh pgrep buildkitd: exit status 1 (233.642772ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 image build -t localhost/my-image:functional-988520 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-988520 image build -t localhost/my-image:functional-988520 testdata/build --alsologtostderr: (3.079919808s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-988520 image build -t localhost/my-image:functional-988520 testdata/build --alsologtostderr:
I0913 18:39:24.722595   21937 out.go:345] Setting OutFile to fd 1 ...
I0913 18:39:24.722731   21937 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:39:24.722740   21937 out.go:358] Setting ErrFile to fd 2...
I0913 18:39:24.722745   21937 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:39:24.722916   21937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3886/.minikube/bin
I0913 18:39:24.723482   21937 config.go:182] Loaded profile config "functional-988520": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:39:24.723988   21937 config.go:182] Loaded profile config "functional-988520": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:39:24.724337   21937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0913 18:39:24.724373   21937 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 18:39:24.740086   21937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37389
I0913 18:39:24.740625   21937 main.go:141] libmachine: () Calling .GetVersion
I0913 18:39:24.741179   21937 main.go:141] libmachine: Using API Version  1
I0913 18:39:24.741200   21937 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 18:39:24.741560   21937 main.go:141] libmachine: () Calling .GetMachineName
I0913 18:39:24.741750   21937 main.go:141] libmachine: (functional-988520) Calling .GetState
I0913 18:39:24.743702   21937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0913 18:39:24.743753   21937 main.go:141] libmachine: Launching plugin server for driver kvm2
I0913 18:39:24.759909   21937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
I0913 18:39:24.760383   21937 main.go:141] libmachine: () Calling .GetVersion
I0913 18:39:24.760972   21937 main.go:141] libmachine: Using API Version  1
I0913 18:39:24.761005   21937 main.go:141] libmachine: () Calling .SetConfigRaw
I0913 18:39:24.761338   21937 main.go:141] libmachine: () Calling .GetMachineName
I0913 18:39:24.761512   21937 main.go:141] libmachine: (functional-988520) Calling .DriverName
I0913 18:39:24.761732   21937 ssh_runner.go:195] Run: systemctl --version
I0913 18:39:24.761760   21937 main.go:141] libmachine: (functional-988520) Calling .GetSSHHostname
I0913 18:39:24.764781   21937 main.go:141] libmachine: (functional-988520) DBG | domain functional-988520 has defined MAC address 52:54:00:47:f6:7c in network mk-functional-988520
I0913 18:39:24.765146   21937 main.go:141] libmachine: (functional-988520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:f6:7c", ip: ""} in network mk-functional-988520: {Iface:virbr1 ExpiryTime:2024-09-13 19:35:54 +0000 UTC Type:0 Mac:52:54:00:47:f6:7c Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:functional-988520 Clientid:01:52:54:00:47:f6:7c}
I0913 18:39:24.765172   21937 main.go:141] libmachine: (functional-988520) DBG | domain functional-988520 has defined IP address 192.168.39.101 and MAC address 52:54:00:47:f6:7c in network mk-functional-988520
I0913 18:39:24.765440   21937 main.go:141] libmachine: (functional-988520) Calling .GetSSHPort
I0913 18:39:24.765608   21937 main.go:141] libmachine: (functional-988520) Calling .GetSSHKeyPath
I0913 18:39:24.765749   21937 main.go:141] libmachine: (functional-988520) Calling .GetSSHUsername
I0913 18:39:24.765855   21937 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/functional-988520/id_rsa Username:docker}
I0913 18:39:24.866796   21937 build_images.go:161] Building image from path: /tmp/build.4169051053.tar
I0913 18:39:24.866878   21937 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0913 18:39:24.882619   21937 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4169051053.tar
I0913 18:39:24.889647   21937 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4169051053.tar: stat -c "%s %y" /var/lib/minikube/build/build.4169051053.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4169051053.tar': No such file or directory
I0913 18:39:24.889687   21937 ssh_runner.go:362] scp /tmp/build.4169051053.tar --> /var/lib/minikube/build/build.4169051053.tar (3072 bytes)
I0913 18:39:24.925235   21937 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4169051053
I0913 18:39:24.950058   21937 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4169051053 -xf /var/lib/minikube/build/build.4169051053.tar
I0913 18:39:24.963072   21937 docker.go:360] Building image: /var/lib/minikube/build/build.4169051053
I0913 18:39:24.963140   21937 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-988520 /var/lib/minikube/build/build.4169051053
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.6s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:939d45e70f2aa3a68d15dde0837816a197f87bb057d5b8aa5878c3ca7b85f202 done
#8 naming to localhost/my-image:functional-988520 done
#8 DONE 0.1s
I0913 18:39:27.722844   21937 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-988520 /var/lib/minikube/build/build.4169051053: (2.759675223s)
I0913 18:39:27.722921   21937 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4169051053
I0913 18:39:27.743049   21937 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4169051053.tar
I0913 18:39:27.755618   21937 build_images.go:217] Built localhost/my-image:functional-988520 from /tmp/build.4169051053.tar
I0913 18:39:27.755662   21937 build_images.go:133] succeeded building to: functional-988520
I0913 18:39:27.755666   21937 build_images.go:134] failed building to: 
I0913 18:39:27.755689   21937 main.go:141] libmachine: Making call to close driver server
I0913 18:39:27.755700   21937 main.go:141] libmachine: (functional-988520) Calling .Close
I0913 18:39:27.756005   21937 main.go:141] libmachine: Successfully made call to close driver server
I0913 18:39:27.756025   21937 main.go:141] libmachine: Making call to close connection to plugin binary
I0913 18:39:27.756034   21937 main.go:141] libmachine: Making call to close driver server
I0913 18:39:27.756042   21937 main.go:141] libmachine: (functional-988520) Calling .Close
I0913 18:39:27.756251   21937 main.go:141] libmachine: Successfully made call to close driver server
I0913 18:39:27.756265   21937 main.go:141] libmachine: Making call to close connection to plugin binary
I0913 18:39:27.756279   21937 main.go:141] libmachine: (functional-988520) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 image ls
2024/09/13 18:39:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.54s)
TestFunctional/parallel/ImageCommands/Setup (1.62s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.59409013s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-988520
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.62s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.11s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 image load --daemon kicbase/echo-server:functional-988520 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.11s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.74s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 image load --daemon kicbase/echo-server:functional-988520 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.74s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-988520
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 image load --daemon kicbase/echo-server:functional-988520 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.51s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 image save kicbase/echo-server:functional-988520 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 image rm kicbase/echo-server:functional-988520 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-988520
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 image save --daemon kicbase/echo-server:functional-988520 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-988520
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.78s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.78s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

TestFunctional/parallel/ProfileCmd/profile_list (0.65s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "585.941473ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "64.821097ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.65s)

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "398.635936ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "171.123544ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.57s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 service list -o json
functional_test.go:1494: Took "322.049134ms" to run "out/minikube-linux-amd64 -p functional-988520 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

TestFunctional/parallel/MountCmd/any-port (21.56s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-988520 /tmp/TestFunctionalparallelMountCmdany-port1844061492/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726252738861843068" to /tmp/TestFunctionalparallelMountCmdany-port1844061492/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726252738861843068" to /tmp/TestFunctionalparallelMountCmdany-port1844061492/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726252738861843068" to /tmp/TestFunctionalparallelMountCmdany-port1844061492/001/test-1726252738861843068
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-988520 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (249.977556ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 13 18:38 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 13 18:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 13 18:38 test-1726252738861843068
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh cat /mount-9p/test-1726252738861843068
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-988520 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2cd6891c-b84c-4bf1-9824-c092e9ef8fbd] Pending
helpers_test.go:344: "busybox-mount" [2cd6891c-b84c-4bf1-9824-c092e9ef8fbd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2cd6891c-b84c-4bf1-9824-c092e9ef8fbd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2cd6891c-b84c-4bf1-9824-c092e9ef8fbd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 19.00405533s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-988520 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-988520 /tmp/TestFunctionalparallelMountCmdany-port1844061492/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (21.56s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.101:31639
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

TestFunctional/parallel/ServiceCmd/URL (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.101:31639
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

TestFunctional/parallel/MountCmd/specific-port (1.83s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-988520 /tmp/TestFunctionalparallelMountCmdspecific-port646902609/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-988520 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (225.550537ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-988520 /tmp/TestFunctionalparallelMountCmdspecific-port646902609/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-988520 ssh "sudo umount -f /mount-9p": exit status 1 (207.341228ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-988520 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-988520 /tmp/TestFunctionalparallelMountCmdspecific-port646902609/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.83s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.45s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-988520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1717357061/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-988520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1717357061/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-988520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1717357061/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-988520 ssh "findmnt -T" /mount1: exit status 1 (262.69711ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-988520 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-988520 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-988520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1717357061/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-988520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1717357061/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-988520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1717357061/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.45s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-988520
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-988520
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-988520
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (185.2s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-419875 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E0913 19:23:23.881093   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/skaffold-714540/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:23:23.887801   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/skaffold-714540/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:23:23.899465   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/skaffold-714540/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:23:23.921150   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/skaffold-714540/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:23:23.962861   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/skaffold-714540/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:23:24.044483   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/skaffold-714540/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:23:24.206424   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/skaffold-714540/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:23:24.528590   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/skaffold-714540/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:23:25.170714   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/skaffold-714540/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:23:26.452841   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/skaffold-714540/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:23:29.014644   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/skaffold-714540/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:23:34.137014   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/skaffold-714540/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:23:44.378869   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/skaffold-714540/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:23:47.848520   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-419875 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m3.71015112s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-419875 cache add gcr.io/k8s-minikube/gvisor-addon:2
E0913 19:24:24.102240   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-419875 cache add gcr.io/k8s-minikube/gvisor-addon:2: (23.585738942s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-419875 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-419875 addons enable gvisor: (3.743646748s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [42ca579e-73a4-49e9-b20f-27b30c6ded67] Running
E0913 19:24:45.822006   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/skaffold-714540/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.005002449s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-419875 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [1d4cff74-045a-428d-8e20-b1800c5aa437] Pending
helpers_test.go:344: "nginx-gvisor" [1d4cff74-045a-428d-8e20-b1800c5aa437] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [1d4cff74-045a-428d-8e20-b1800c5aa437] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 25.005521295s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-419875
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-419875: (2.313095178s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-419875 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-419875 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (48.734619066s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [42ca579e-73a4-49e9-b20f-27b30c6ded67] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.005292604s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [1d4cff74-045a-428d-8e20-b1800c5aa437] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.004699933s
helpers_test.go:175: Cleaning up "gvisor-419875" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-419875
--- PASS: TestGvisorAddon (185.20s)

TestMultiControlPlane/serial/StartCluster (215.88s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-602459 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0913 18:39:44.600213   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:40:05.081689   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:40:46.043003   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:42:07.965483   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-602459 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m35.207411472s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (215.88s)

TestMultiControlPlane/serial/DeployApp (5.45s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-602459 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-602459 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-602459 -- rollout status deployment/busybox: (3.060909104s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-602459 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-602459 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-602459 -- exec busybox-7dff88458-fnr2l -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-602459 -- exec busybox-7dff88458-sk9zj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-602459 -- exec busybox-7dff88458-zjwwn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-602459 -- exec busybox-7dff88458-fnr2l -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-602459 -- exec busybox-7dff88458-sk9zj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-602459 -- exec busybox-7dff88458-zjwwn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-602459 -- exec busybox-7dff88458-fnr2l -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-602459 -- exec busybox-7dff88458-sk9zj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-602459 -- exec busybox-7dff88458-zjwwn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.45s)

TestMultiControlPlane/serial/PingHostFromPods (1.32s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-602459 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-602459 -- exec busybox-7dff88458-fnr2l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-602459 -- exec busybox-7dff88458-fnr2l -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-602459 -- exec busybox-7dff88458-sk9zj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-602459 -- exec busybox-7dff88458-sk9zj -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-602459 -- exec busybox-7dff88458-zjwwn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-602459 -- exec busybox-7dff88458-zjwwn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.32s)
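The `ha_test.go:207` step above resolves the host IP by taking line 5 of the pod's `nslookup host.minikube.internal` output and cutting out the third space-separated field. A minimal local sketch of that extraction, using a simulated busybox-style nslookup layout (the output text below is illustrative, not captured from this run):

```shell
#!/bin/sh
# Simulated busybox-style nslookup output; the real test pipes
# `nslookup host.minikube.internal` from inside the busybox pod.
nslookup_output='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1'

# Same pipeline as the test: line 5, third space-separated field.
host_ip=$(printf '%s\n' "$nslookup_output" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

Note that this `NR==5` trick is tied to one particular nslookup output layout; a resolver that prints a different number of header lines would shift the address onto another line.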

TestMultiControlPlane/serial/AddWorkerNode (65.52s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-602459 -v=7 --alsologtostderr
E0913 18:43:47.848507   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:43:47.854899   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:43:47.866321   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:43:47.887781   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:43:47.929906   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:43:48.011396   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:43:48.172655   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:43:48.494732   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:43:49.136812   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:43:50.418812   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:43:52.980977   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:43:58.103309   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:08.344666   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-602459 -v=7 --alsologtostderr: (1m4.674697603s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (65.52s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-602459 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0913 18:44:24.103055   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

TestMultiControlPlane/serial/CopyFile (12.99s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 cp testdata/cp-test.txt ha-602459:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 cp ha-602459:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile804359218/001/cp-test_ha-602459.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 cp ha-602459:/home/docker/cp-test.txt ha-602459-m02:/home/docker/cp-test_ha-602459_ha-602459-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m02 "sudo cat /home/docker/cp-test_ha-602459_ha-602459-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 cp ha-602459:/home/docker/cp-test.txt ha-602459-m03:/home/docker/cp-test_ha-602459_ha-602459-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m03 "sudo cat /home/docker/cp-test_ha-602459_ha-602459-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 cp ha-602459:/home/docker/cp-test.txt ha-602459-m04:/home/docker/cp-test_ha-602459_ha-602459-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m04 "sudo cat /home/docker/cp-test_ha-602459_ha-602459-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 cp testdata/cp-test.txt ha-602459-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 cp ha-602459-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile804359218/001/cp-test_ha-602459-m02.txt
E0913 18:44:28.826681   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 cp ha-602459-m02:/home/docker/cp-test.txt ha-602459:/home/docker/cp-test_ha-602459-m02_ha-602459.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459 "sudo cat /home/docker/cp-test_ha-602459-m02_ha-602459.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 cp ha-602459-m02:/home/docker/cp-test.txt ha-602459-m03:/home/docker/cp-test_ha-602459-m02_ha-602459-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m03 "sudo cat /home/docker/cp-test_ha-602459-m02_ha-602459-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 cp ha-602459-m02:/home/docker/cp-test.txt ha-602459-m04:/home/docker/cp-test_ha-602459-m02_ha-602459-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m04 "sudo cat /home/docker/cp-test_ha-602459-m02_ha-602459-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 cp testdata/cp-test.txt ha-602459-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 cp ha-602459-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile804359218/001/cp-test_ha-602459-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 cp ha-602459-m03:/home/docker/cp-test.txt ha-602459:/home/docker/cp-test_ha-602459-m03_ha-602459.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459 "sudo cat /home/docker/cp-test_ha-602459-m03_ha-602459.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 cp ha-602459-m03:/home/docker/cp-test.txt ha-602459-m02:/home/docker/cp-test_ha-602459-m03_ha-602459-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m02 "sudo cat /home/docker/cp-test_ha-602459-m03_ha-602459-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 cp ha-602459-m03:/home/docker/cp-test.txt ha-602459-m04:/home/docker/cp-test_ha-602459-m03_ha-602459-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m04 "sudo cat /home/docker/cp-test_ha-602459-m03_ha-602459-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 cp testdata/cp-test.txt ha-602459-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 cp ha-602459-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile804359218/001/cp-test_ha-602459-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 cp ha-602459-m04:/home/docker/cp-test.txt ha-602459:/home/docker/cp-test_ha-602459-m04_ha-602459.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459 "sudo cat /home/docker/cp-test_ha-602459-m04_ha-602459.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 cp ha-602459-m04:/home/docker/cp-test.txt ha-602459-m02:/home/docker/cp-test_ha-602459-m04_ha-602459-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m02 "sudo cat /home/docker/cp-test_ha-602459-m04_ha-602459-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 cp ha-602459-m04:/home/docker/cp-test.txt ha-602459-m03:/home/docker/cp-test_ha-602459-m04_ha-602459-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 ssh -n ha-602459-m03 "sudo cat /home/docker/cp-test_ha-602459-m04_ha-602459-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.99s)
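Each CopyFile step above pushes `testdata/cp-test.txt` to a node with `minikube cp` and then reads it back via `ssh ... sudo cat` to confirm the bytes survived the transfer. A minimal local sketch of that round-trip check, with plain `cp` standing in for the `minikube cp`/`ssh cat` pair:

```shell
#!/bin/sh
# Local stand-in for the cp round-trip: write a test file, "copy" it
# (plain cp here; the real test uses `minikube cp` and `ssh sudo cat`),
# then verify the content came back unchanged.
tmpdir=$(mktemp -d)
printf 'cp-test contents\n' > "$tmpdir/cp-test.txt"
cp "$tmpdir/cp-test.txt" "$tmpdir/cp-test_roundtrip.txt"

ok=0
if cmp -s "$tmpdir/cp-test.txt" "$tmpdir/cp-test_roundtrip.txt"; then
  ok=1
  echo "round-trip OK"
fi
rm -r "$tmpdir"
```

The byte-for-byte comparison is the point: the test asserts content equality after every hop (host to node, node to host, node to node), not merely that the copy command exited 0.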

TestMultiControlPlane/serial/StopSecondaryNode (13.93s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-602459 node stop m02 -v=7 --alsologtostderr: (13.308275064s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-602459 status -v=7 --alsologtostderr: exit status 7 (622.702364ms)

-- stdout --
	ha-602459
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-602459-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-602459-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-602459-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0913 18:44:50.797405   26376 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:44:50.797519   26376 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:44:50.797528   26376 out.go:358] Setting ErrFile to fd 2...
	I0913 18:44:50.797532   26376 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:44:50.797735   26376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3886/.minikube/bin
	I0913 18:44:50.797896   26376 out.go:352] Setting JSON to false
	I0913 18:44:50.797927   26376 mustload.go:65] Loading cluster: ha-602459
	I0913 18:44:50.798049   26376 notify.go:220] Checking for updates...
	I0913 18:44:50.798482   26376 config.go:182] Loaded profile config "ha-602459": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 18:44:50.798500   26376 status.go:255] checking status of ha-602459 ...
	I0913 18:44:50.798999   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:44:50.799060   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:44:50.816785   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44321
	I0913 18:44:50.817293   26376 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:44:50.817936   26376 main.go:141] libmachine: Using API Version  1
	I0913 18:44:50.817952   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:44:50.818272   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:44:50.818471   26376 main.go:141] libmachine: (ha-602459) Calling .GetState
	I0913 18:44:50.820163   26376 status.go:330] ha-602459 host status = "Running" (err=<nil>)
	I0913 18:44:50.820178   26376 host.go:66] Checking if "ha-602459" exists ...
	I0913 18:44:50.820492   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:44:50.820535   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:44:50.835398   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46247
	I0913 18:44:50.835845   26376 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:44:50.836311   26376 main.go:141] libmachine: Using API Version  1
	I0913 18:44:50.836332   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:44:50.836656   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:44:50.836815   26376 main.go:141] libmachine: (ha-602459) Calling .GetIP
	I0913 18:44:50.839387   26376 main.go:141] libmachine: (ha-602459) DBG | domain ha-602459 has defined MAC address 52:54:00:54:01:50 in network mk-ha-602459
	I0913 18:44:50.839799   26376 main.go:141] libmachine: (ha-602459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:01:50", ip: ""} in network mk-ha-602459: {Iface:virbr1 ExpiryTime:2024-09-13 19:39:49 +0000 UTC Type:0 Mac:52:54:00:54:01:50 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-602459 Clientid:01:52:54:00:54:01:50}
	I0913 18:44:50.839827   26376 main.go:141] libmachine: (ha-602459) DBG | domain ha-602459 has defined IP address 192.168.39.43 and MAC address 52:54:00:54:01:50 in network mk-ha-602459
	I0913 18:44:50.839938   26376 host.go:66] Checking if "ha-602459" exists ...
	I0913 18:44:50.840214   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:44:50.840255   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:44:50.855416   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41113
	I0913 18:44:50.855780   26376 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:44:50.856253   26376 main.go:141] libmachine: Using API Version  1
	I0913 18:44:50.856279   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:44:50.856654   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:44:50.856851   26376 main.go:141] libmachine: (ha-602459) Calling .DriverName
	I0913 18:44:50.857073   26376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:44:50.857114   26376 main.go:141] libmachine: (ha-602459) Calling .GetSSHHostname
	I0913 18:44:50.860184   26376 main.go:141] libmachine: (ha-602459) DBG | domain ha-602459 has defined MAC address 52:54:00:54:01:50 in network mk-ha-602459
	I0913 18:44:50.860662   26376 main.go:141] libmachine: (ha-602459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:01:50", ip: ""} in network mk-ha-602459: {Iface:virbr1 ExpiryTime:2024-09-13 19:39:49 +0000 UTC Type:0 Mac:52:54:00:54:01:50 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-602459 Clientid:01:52:54:00:54:01:50}
	I0913 18:44:50.860689   26376 main.go:141] libmachine: (ha-602459) DBG | domain ha-602459 has defined IP address 192.168.39.43 and MAC address 52:54:00:54:01:50 in network mk-ha-602459
	I0913 18:44:50.860858   26376 main.go:141] libmachine: (ha-602459) Calling .GetSSHPort
	I0913 18:44:50.861027   26376 main.go:141] libmachine: (ha-602459) Calling .GetSSHKeyPath
	I0913 18:44:50.861186   26376 main.go:141] libmachine: (ha-602459) Calling .GetSSHUsername
	I0913 18:44:50.861340   26376 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/ha-602459/id_rsa Username:docker}
	I0913 18:44:50.945734   26376 ssh_runner.go:195] Run: systemctl --version
	I0913 18:44:50.951683   26376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:44:50.969309   26376 kubeconfig.go:125] found "ha-602459" server: "https://192.168.39.254:8443"
	I0913 18:44:50.969341   26376 api_server.go:166] Checking apiserver status ...
	I0913 18:44:50.969372   26376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:44:50.985873   26376 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1850/cgroup
	W0913 18:44:50.997376   26376 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1850/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:44:50.997441   26376 ssh_runner.go:195] Run: ls
	I0913 18:44:51.001861   26376 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:44:51.008797   26376 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:44:51.008824   26376 status.go:422] ha-602459 apiserver status = Running (err=<nil>)
	I0913 18:44:51.008836   26376 status.go:257] ha-602459 status: &{Name:ha-602459 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:44:51.008863   26376 status.go:255] checking status of ha-602459-m02 ...
	I0913 18:44:51.009166   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:44:51.009211   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:44:51.023686   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42145
	I0913 18:44:51.024184   26376 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:44:51.024669   26376 main.go:141] libmachine: Using API Version  1
	I0913 18:44:51.024686   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:44:51.024988   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:44:51.025154   26376 main.go:141] libmachine: (ha-602459-m02) Calling .GetState
	I0913 18:44:51.026783   26376 status.go:330] ha-602459-m02 host status = "Stopped" (err=<nil>)
	I0913 18:44:51.026796   26376 status.go:343] host is not running, skipping remaining checks
	I0913 18:44:51.026802   26376 status.go:257] ha-602459-m02 status: &{Name:ha-602459-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:44:51.026821   26376 status.go:255] checking status of ha-602459-m03 ...
	I0913 18:44:51.027092   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:44:51.027125   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:44:51.043365   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0913 18:44:51.043829   26376 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:44:51.044279   26376 main.go:141] libmachine: Using API Version  1
	I0913 18:44:51.044302   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:44:51.044603   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:44:51.044757   26376 main.go:141] libmachine: (ha-602459-m03) Calling .GetState
	I0913 18:44:51.046281   26376 status.go:330] ha-602459-m03 host status = "Running" (err=<nil>)
	I0913 18:44:51.046295   26376 host.go:66] Checking if "ha-602459-m03" exists ...
	I0913 18:44:51.046610   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:44:51.046650   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:44:51.061627   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34497
	I0913 18:44:51.062086   26376 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:44:51.062601   26376 main.go:141] libmachine: Using API Version  1
	I0913 18:44:51.062622   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:44:51.062892   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:44:51.063051   26376 main.go:141] libmachine: (ha-602459-m03) Calling .GetIP
	I0913 18:44:51.066115   26376 main.go:141] libmachine: (ha-602459-m03) DBG | domain ha-602459-m03 has defined MAC address 52:54:00:db:6d:a4 in network mk-ha-602459
	I0913 18:44:51.066613   26376 main.go:141] libmachine: (ha-602459-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:6d:a4", ip: ""} in network mk-ha-602459: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:05 +0000 UTC Type:0 Mac:52:54:00:db:6d:a4 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-602459-m03 Clientid:01:52:54:00:db:6d:a4}
	I0913 18:44:51.066647   26376 main.go:141] libmachine: (ha-602459-m03) DBG | domain ha-602459-m03 has defined IP address 192.168.39.154 and MAC address 52:54:00:db:6d:a4 in network mk-ha-602459
	I0913 18:44:51.066901   26376 host.go:66] Checking if "ha-602459-m03" exists ...
	I0913 18:44:51.067200   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:44:51.067236   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:44:51.081789   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43855
	I0913 18:44:51.082230   26376 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:44:51.082703   26376 main.go:141] libmachine: Using API Version  1
	I0913 18:44:51.082725   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:44:51.083030   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:44:51.083216   26376 main.go:141] libmachine: (ha-602459-m03) Calling .DriverName
	I0913 18:44:51.083403   26376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:44:51.083426   26376 main.go:141] libmachine: (ha-602459-m03) Calling .GetSSHHostname
	I0913 18:44:51.086055   26376 main.go:141] libmachine: (ha-602459-m03) DBG | domain ha-602459-m03 has defined MAC address 52:54:00:db:6d:a4 in network mk-ha-602459
	I0913 18:44:51.086511   26376 main.go:141] libmachine: (ha-602459-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:6d:a4", ip: ""} in network mk-ha-602459: {Iface:virbr1 ExpiryTime:2024-09-13 19:42:05 +0000 UTC Type:0 Mac:52:54:00:db:6d:a4 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-602459-m03 Clientid:01:52:54:00:db:6d:a4}
	I0913 18:44:51.086539   26376 main.go:141] libmachine: (ha-602459-m03) DBG | domain ha-602459-m03 has defined IP address 192.168.39.154 and MAC address 52:54:00:db:6d:a4 in network mk-ha-602459
	I0913 18:44:51.086796   26376 main.go:141] libmachine: (ha-602459-m03) Calling .GetSSHPort
	I0913 18:44:51.086990   26376 main.go:141] libmachine: (ha-602459-m03) Calling .GetSSHKeyPath
	I0913 18:44:51.087133   26376 main.go:141] libmachine: (ha-602459-m03) Calling .GetSSHUsername
	I0913 18:44:51.087258   26376 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/ha-602459-m03/id_rsa Username:docker}
	I0913 18:44:51.166125   26376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:44:51.182599   26376 kubeconfig.go:125] found "ha-602459" server: "https://192.168.39.254:8443"
	I0913 18:44:51.182634   26376 api_server.go:166] Checking apiserver status ...
	I0913 18:44:51.182675   26376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:44:51.198368   26376 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1798/cgroup
	W0913 18:44:51.210400   26376 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1798/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 18:44:51.210462   26376 ssh_runner.go:195] Run: ls
	I0913 18:44:51.215129   26376 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0913 18:44:51.219364   26376 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0913 18:44:51.219388   26376 status.go:422] ha-602459-m03 apiserver status = Running (err=<nil>)
	I0913 18:44:51.219396   26376 status.go:257] ha-602459-m03 status: &{Name:ha-602459-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:44:51.219411   26376 status.go:255] checking status of ha-602459-m04 ...
	I0913 18:44:51.219707   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:44:51.219744   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:44:51.234819   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43253
	I0913 18:44:51.235284   26376 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:44:51.235753   26376 main.go:141] libmachine: Using API Version  1
	I0913 18:44:51.235781   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:44:51.236130   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:44:51.236295   26376 main.go:141] libmachine: (ha-602459-m04) Calling .GetState
	I0913 18:44:51.237902   26376 status.go:330] ha-602459-m04 host status = "Running" (err=<nil>)
	I0913 18:44:51.237918   26376 host.go:66] Checking if "ha-602459-m04" exists ...
	I0913 18:44:51.238189   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:44:51.238231   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:44:51.253191   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0913 18:44:51.253651   26376 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:44:51.254137   26376 main.go:141] libmachine: Using API Version  1
	I0913 18:44:51.254162   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:44:51.254491   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:44:51.254689   26376 main.go:141] libmachine: (ha-602459-m04) Calling .GetIP
	I0913 18:44:51.257830   26376 main.go:141] libmachine: (ha-602459-m04) DBG | domain ha-602459-m04 has defined MAC address 52:54:00:6e:2d:7b in network mk-ha-602459
	I0913 18:44:51.258279   26376 main.go:141] libmachine: (ha-602459-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:2d:7b", ip: ""} in network mk-ha-602459: {Iface:virbr1 ExpiryTime:2024-09-13 19:43:33 +0000 UTC Type:0 Mac:52:54:00:6e:2d:7b Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-602459-m04 Clientid:01:52:54:00:6e:2d:7b}
	I0913 18:44:51.258298   26376 main.go:141] libmachine: (ha-602459-m04) DBG | domain ha-602459-m04 has defined IP address 192.168.39.160 and MAC address 52:54:00:6e:2d:7b in network mk-ha-602459
	I0913 18:44:51.258480   26376 host.go:66] Checking if "ha-602459-m04" exists ...
	I0913 18:44:51.258889   26376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:44:51.258939   26376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:44:51.275615   26376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39475
	I0913 18:44:51.276112   26376 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:44:51.276593   26376 main.go:141] libmachine: Using API Version  1
	I0913 18:44:51.276617   26376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:44:51.276933   26376 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:44:51.277136   26376 main.go:141] libmachine: (ha-602459-m04) Calling .DriverName
	I0913 18:44:51.277303   26376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:44:51.277324   26376 main.go:141] libmachine: (ha-602459-m04) Calling .GetSSHHostname
	I0913 18:44:51.280272   26376 main.go:141] libmachine: (ha-602459-m04) DBG | domain ha-602459-m04 has defined MAC address 52:54:00:6e:2d:7b in network mk-ha-602459
	I0913 18:44:51.280712   26376 main.go:141] libmachine: (ha-602459-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:2d:7b", ip: ""} in network mk-ha-602459: {Iface:virbr1 ExpiryTime:2024-09-13 19:43:33 +0000 UTC Type:0 Mac:52:54:00:6e:2d:7b Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-602459-m04 Clientid:01:52:54:00:6e:2d:7b}
	I0913 18:44:51.280753   26376 main.go:141] libmachine: (ha-602459-m04) DBG | domain ha-602459-m04 has defined IP address 192.168.39.160 and MAC address 52:54:00:6e:2d:7b in network mk-ha-602459
	I0913 18:44:51.280898   26376 main.go:141] libmachine: (ha-602459-m04) Calling .GetSSHPort
	I0913 18:44:51.281073   26376 main.go:141] libmachine: (ha-602459-m04) Calling .GetSSHKeyPath
	I0913 18:44:51.281189   26376 main.go:141] libmachine: (ha-602459-m04) Calling .GetSSHUsername
	I0913 18:44:51.281310   26376 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/ha-602459-m04/id_rsa Username:docker}
	I0913 18:44:51.362214   26376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:44:51.377018   26376 status.go:257] ha-602459-m04 status: &{Name:ha-602459-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.93s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.39s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.39s)

TestMultiControlPlane/serial/RestartSecondaryNode (47.51s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 node start m02 -v=7 --alsologtostderr
E0913 18:44:51.807610   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:45:09.788067   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-602459 node start m02 -v=7 --alsologtostderr: (46.570817856s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (47.51s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.54s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.54s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (260.81s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-602459 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-602459 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-602459 -v=7 --alsologtostderr: (41.582691026s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-602459 --wait=true -v=7 --alsologtostderr
E0913 18:46:31.710057   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:48:47.848695   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:49:15.551489   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:49:24.102525   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-602459 --wait=true -v=7 --alsologtostderr: (3m39.133229668s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-602459
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (260.81s)

TestMultiControlPlane/serial/DeleteSecondaryNode (7.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-602459 node delete m03 -v=7 --alsologtostderr: (6.328888935s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

TestMultiControlPlane/serial/StopCluster (38.34s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-602459 stop -v=7 --alsologtostderr: (38.239871415s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-602459 status -v=7 --alsologtostderr: exit status 7 (101.029706ms)

-- stdout --
	ha-602459
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-602459-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-602459-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0913 18:50:46.395084   28787 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:50:46.395217   28787 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:50:46.395226   28787 out.go:358] Setting ErrFile to fd 2...
	I0913 18:50:46.395230   28787 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:50:46.395396   28787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3886/.minikube/bin
	I0913 18:50:46.395555   28787 out.go:352] Setting JSON to false
	I0913 18:50:46.395586   28787 mustload.go:65] Loading cluster: ha-602459
	I0913 18:50:46.395635   28787 notify.go:220] Checking for updates...
	I0913 18:50:46.396162   28787 config.go:182] Loaded profile config "ha-602459": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 18:50:46.396183   28787 status.go:255] checking status of ha-602459 ...
	I0913 18:50:46.396656   28787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:50:46.396705   28787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:50:46.414941   28787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44153
	I0913 18:50:46.415464   28787 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:50:46.416105   28787 main.go:141] libmachine: Using API Version  1
	I0913 18:50:46.416132   28787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:50:46.416577   28787 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:50:46.416765   28787 main.go:141] libmachine: (ha-602459) Calling .GetState
	I0913 18:50:46.418590   28787 status.go:330] ha-602459 host status = "Stopped" (err=<nil>)
	I0913 18:50:46.418606   28787 status.go:343] host is not running, skipping remaining checks
	I0913 18:50:46.418614   28787 status.go:257] ha-602459 status: &{Name:ha-602459 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:50:46.418646   28787 status.go:255] checking status of ha-602459-m02 ...
	I0913 18:50:46.418957   28787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:50:46.419001   28787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:50:46.433472   28787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36313
	I0913 18:50:46.433860   28787 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:50:46.434322   28787 main.go:141] libmachine: Using API Version  1
	I0913 18:50:46.434358   28787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:50:46.434664   28787 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:50:46.434858   28787 main.go:141] libmachine: (ha-602459-m02) Calling .GetState
	I0913 18:50:46.436391   28787 status.go:330] ha-602459-m02 host status = "Stopped" (err=<nil>)
	I0913 18:50:46.436403   28787 status.go:343] host is not running, skipping remaining checks
	I0913 18:50:46.436408   28787 status.go:257] ha-602459-m02 status: &{Name:ha-602459-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:50:46.436446   28787 status.go:255] checking status of ha-602459-m04 ...
	I0913 18:50:46.436737   28787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 18:50:46.436769   28787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 18:50:46.451171   28787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45017
	I0913 18:50:46.451614   28787 main.go:141] libmachine: () Calling .GetVersion
	I0913 18:50:46.452103   28787 main.go:141] libmachine: Using API Version  1
	I0913 18:50:46.452122   28787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 18:50:46.452469   28787 main.go:141] libmachine: () Calling .GetMachineName
	I0913 18:50:46.452634   28787 main.go:141] libmachine: (ha-602459-m04) Calling .GetState
	I0913 18:50:46.454163   28787 status.go:330] ha-602459-m04 host status = "Stopped" (err=<nil>)
	I0913 18:50:46.454183   28787 status.go:343] host is not running, skipping remaining checks
	I0913 18:50:46.454188   28787 status.go:257] ha-602459-m04 status: &{Name:ha-602459-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (38.34s)

TestMultiControlPlane/serial/RestartCluster (162.62s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-602459 --wait=true -v=7 --alsologtostderr --driver=kvm2 
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-602459 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m41.881296354s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (162.62s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

TestMultiControlPlane/serial/AddSecondaryNode (81.56s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-602459 --control-plane -v=7 --alsologtostderr
E0913 18:53:47.848968   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:54:24.102415   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-602459 --control-plane -v=7 --alsologtostderr: (1m20.696007986s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-602459 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.56s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

TestImageBuild/serial/Setup (51.5s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-299145 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-299145 --driver=kvm2 : (51.502111641s)
--- PASS: TestImageBuild/serial/Setup (51.50s)

TestImageBuild/serial/NormalBuild (2.32s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-299145
E0913 18:55:47.168846   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-299145: (2.323448806s)
--- PASS: TestImageBuild/serial/NormalBuild (2.32s)

TestImageBuild/serial/BuildWithBuildArg (1.14s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-299145
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-299145: (1.137056112s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.14s)

TestImageBuild/serial/BuildWithDockerIgnore (1.05s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-299145
image_test.go:133: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-299145: (1.045701879s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.05s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.86s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-299145
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.86s)

TestJSONOutput/start/Command (91.29s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-160420 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-160420 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m31.290805518s)
--- PASS: TestJSONOutput/start/Command (91.29s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.56s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-160420 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.56s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.53s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-160420 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.53s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.66s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-160420 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-160420 --output=json --user=testUser: (12.660889998s)
--- PASS: TestJSONOutput/stop/Command (12.66s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-195210 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-195210 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.847715ms)

-- stdout --
	{"specversion":"1.0","id":"0e5afed4-0d6f-4a9c-af38-6486cf3daebf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-195210] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"668dbe58-a8cb-414a-a859-50cc455a06ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19636"}}
	{"specversion":"1.0","id":"e4462233-a8bd-4428-901f-70ce9b33afef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"56663bca-e2c7-4073-856d-2c5ac9e278d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19636-3886/kubeconfig"}}
	{"specversion":"1.0","id":"de49d6a7-2d5b-4905-b633-3dc96169eccf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3886/.minikube"}}
	{"specversion":"1.0","id":"289e3233-9aa2-44bd-87b7-a281c996ac08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7d5b0ada-6201-4d90-8ec4-64bcecb0f44a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6a26e73e-8fc6-4271-852e-dea5254d14e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-195210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-195210
--- PASS: TestErrorJSONOutput (0.19s)
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)
TestMinikubeProfile (104.39s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-240981 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-240981 --driver=kvm2 : (51.465714036s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-251967 --driver=kvm2 
E0913 18:58:47.849011   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-251967 --driver=kvm2 : (50.289197585s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-240981
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-251967
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-251967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-251967
helpers_test.go:175: Cleaning up "first-240981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-240981
--- PASS: TestMinikubeProfile (104.39s)
TestMountStart/serial/StartWithMountFirst (32.54s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-711116 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0913 18:59:24.102276   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-711116 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (31.540312309s)
--- PASS: TestMountStart/serial/StartWithMountFirst (32.54s)
TestMountStart/serial/VerifyMountFirst (0.38s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-711116 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-711116 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
TestMountStart/serial/StartWithMountSecond (30.81s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-726991 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0913 19:00:10.915748   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-726991 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (29.804851847s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.81s)
TestMountStart/serial/VerifyMountSecond (0.38s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-726991 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-726991 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)
TestMountStart/serial/DeleteFirst (0.7s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-711116 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)
TestMountStart/serial/VerifyMountPostDelete (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-726991 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-726991 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)
TestMountStart/serial/Stop (3.28s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-726991
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-726991: (3.283832759s)
--- PASS: TestMountStart/serial/Stop (3.28s)
TestMountStart/serial/RestartStopped (26.22s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-726991
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-726991: (25.21592864s)
--- PASS: TestMountStart/serial/RestartStopped (26.22s)
TestMountStart/serial/VerifyMountPostStop (0.37s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-726991 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-726991 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)
TestMultiNode/serial/FreshStart2Nodes (125.54s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-434318 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-434318 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m5.113239605s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (125.54s)
TestMultiNode/serial/DeployApp2Nodes (4.27s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-434318 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-434318 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-434318 -- rollout status deployment/busybox: (2.635301121s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-434318 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-434318 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-434318 -- exec busybox-7dff88458-27twf -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-434318 -- exec busybox-7dff88458-flwvv -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-434318 -- exec busybox-7dff88458-27twf -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-434318 -- exec busybox-7dff88458-flwvv -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-434318 -- exec busybox-7dff88458-27twf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-434318 -- exec busybox-7dff88458-flwvv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.27s)
TestMultiNode/serial/PingHostFrom2Pods (0.84s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-434318 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-434318 -- exec busybox-7dff88458-27twf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-434318 -- exec busybox-7dff88458-27twf -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-434318 -- exec busybox-7dff88458-flwvv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-434318 -- exec busybox-7dff88458-flwvv -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)
TestMultiNode/serial/AddNode (61.96s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-434318 -v 3 --alsologtostderr
E0913 19:03:47.848190   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-434318 -v 3 --alsologtostderr: (1m1.401820792s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (61.96s)
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-434318 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)
TestMultiNode/serial/ProfileList (0.22s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)
TestMultiNode/serial/CopyFile (7.11s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 cp testdata/cp-test.txt multinode-434318:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 ssh -n multinode-434318 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 cp multinode-434318:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile86816651/001/cp-test_multinode-434318.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 ssh -n multinode-434318 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 cp multinode-434318:/home/docker/cp-test.txt multinode-434318-m02:/home/docker/cp-test_multinode-434318_multinode-434318-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 ssh -n multinode-434318 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 ssh -n multinode-434318-m02 "sudo cat /home/docker/cp-test_multinode-434318_multinode-434318-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 cp multinode-434318:/home/docker/cp-test.txt multinode-434318-m03:/home/docker/cp-test_multinode-434318_multinode-434318-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 ssh -n multinode-434318 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 ssh -n multinode-434318-m03 "sudo cat /home/docker/cp-test_multinode-434318_multinode-434318-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 cp testdata/cp-test.txt multinode-434318-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 ssh -n multinode-434318-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 cp multinode-434318-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile86816651/001/cp-test_multinode-434318-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 ssh -n multinode-434318-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 cp multinode-434318-m02:/home/docker/cp-test.txt multinode-434318:/home/docker/cp-test_multinode-434318-m02_multinode-434318.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 ssh -n multinode-434318-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 ssh -n multinode-434318 "sudo cat /home/docker/cp-test_multinode-434318-m02_multinode-434318.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 cp multinode-434318-m02:/home/docker/cp-test.txt multinode-434318-m03:/home/docker/cp-test_multinode-434318-m02_multinode-434318-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 ssh -n multinode-434318-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 ssh -n multinode-434318-m03 "sudo cat /home/docker/cp-test_multinode-434318-m02_multinode-434318-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 cp testdata/cp-test.txt multinode-434318-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 ssh -n multinode-434318-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 cp multinode-434318-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile86816651/001/cp-test_multinode-434318-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 ssh -n multinode-434318-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 cp multinode-434318-m03:/home/docker/cp-test.txt multinode-434318:/home/docker/cp-test_multinode-434318-m03_multinode-434318.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 ssh -n multinode-434318-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 ssh -n multinode-434318 "sudo cat /home/docker/cp-test_multinode-434318-m03_multinode-434318.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 cp multinode-434318-m03:/home/docker/cp-test.txt multinode-434318-m02:/home/docker/cp-test_multinode-434318-m03_multinode-434318-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 ssh -n multinode-434318-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 ssh -n multinode-434318-m02 "sudo cat /home/docker/cp-test_multinode-434318-m03_multinode-434318-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.11s)
TestMultiNode/serial/StopNode (3.34s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-434318 node stop m03: (2.480537274s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-434318 status: exit status 7 (424.361456ms)
-- stdout --
	multinode-434318
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-434318-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-434318-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-434318 status --alsologtostderr: exit status 7 (432.580045ms)
-- stdout --
	multinode-434318
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-434318-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-434318-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0913 19:04:21.744101   37257 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:04:21.744350   37257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:04:21.744360   37257 out.go:358] Setting ErrFile to fd 2...
	I0913 19:04:21.744364   37257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:04:21.744543   37257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3886/.minikube/bin
	I0913 19:04:21.744707   37257 out.go:352] Setting JSON to false
	I0913 19:04:21.744736   37257 mustload.go:65] Loading cluster: multinode-434318
	I0913 19:04:21.744787   37257 notify.go:220] Checking for updates...
	I0913 19:04:21.745303   37257 config.go:182] Loaded profile config "multinode-434318": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 19:04:21.745324   37257 status.go:255] checking status of multinode-434318 ...
	I0913 19:04:21.745850   37257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 19:04:21.745887   37257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:04:21.763840   37257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40745
	I0913 19:04:21.764379   37257 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:04:21.765078   37257 main.go:141] libmachine: Using API Version  1
	I0913 19:04:21.765108   37257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:04:21.765410   37257 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:04:21.765569   37257 main.go:141] libmachine: (multinode-434318) Calling .GetState
	I0913 19:04:21.767229   37257 status.go:330] multinode-434318 host status = "Running" (err=<nil>)
	I0913 19:04:21.767243   37257 host.go:66] Checking if "multinode-434318" exists ...
	I0913 19:04:21.767548   37257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 19:04:21.767589   37257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:04:21.782680   37257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37595
	I0913 19:04:21.783147   37257 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:04:21.783675   37257 main.go:141] libmachine: Using API Version  1
	I0913 19:04:21.783692   37257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:04:21.784014   37257 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:04:21.784221   37257 main.go:141] libmachine: (multinode-434318) Calling .GetIP
	I0913 19:04:21.787081   37257 main.go:141] libmachine: (multinode-434318) DBG | domain multinode-434318 has defined MAC address 52:54:00:d7:ee:9c in network mk-multinode-434318
	I0913 19:04:21.787549   37257 main.go:141] libmachine: (multinode-434318) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ee:9c", ip: ""} in network mk-multinode-434318: {Iface:virbr1 ExpiryTime:2024-09-13 20:01:13 +0000 UTC Type:0 Mac:52:54:00:d7:ee:9c Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-434318 Clientid:01:52:54:00:d7:ee:9c}
	I0913 19:04:21.787580   37257 main.go:141] libmachine: (multinode-434318) DBG | domain multinode-434318 has defined IP address 192.168.39.5 and MAC address 52:54:00:d7:ee:9c in network mk-multinode-434318
	I0913 19:04:21.787727   37257 host.go:66] Checking if "multinode-434318" exists ...
	I0913 19:04:21.788056   37257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 19:04:21.788101   37257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:04:21.804232   37257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39289
	I0913 19:04:21.804686   37257 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:04:21.805118   37257 main.go:141] libmachine: Using API Version  1
	I0913 19:04:21.805141   37257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:04:21.805556   37257 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:04:21.805794   37257 main.go:141] libmachine: (multinode-434318) Calling .DriverName
	I0913 19:04:21.805994   37257 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 19:04:21.806014   37257 main.go:141] libmachine: (multinode-434318) Calling .GetSSHHostname
	I0913 19:04:21.808772   37257 main.go:141] libmachine: (multinode-434318) DBG | domain multinode-434318 has defined MAC address 52:54:00:d7:ee:9c in network mk-multinode-434318
	I0913 19:04:21.809182   37257 main.go:141] libmachine: (multinode-434318) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ee:9c", ip: ""} in network mk-multinode-434318: {Iface:virbr1 ExpiryTime:2024-09-13 20:01:13 +0000 UTC Type:0 Mac:52:54:00:d7:ee:9c Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-434318 Clientid:01:52:54:00:d7:ee:9c}
	I0913 19:04:21.809209   37257 main.go:141] libmachine: (multinode-434318) DBG | domain multinode-434318 has defined IP address 192.168.39.5 and MAC address 52:54:00:d7:ee:9c in network mk-multinode-434318
	I0913 19:04:21.809374   37257 main.go:141] libmachine: (multinode-434318) Calling .GetSSHPort
	I0913 19:04:21.809548   37257 main.go:141] libmachine: (multinode-434318) Calling .GetSSHKeyPath
	I0913 19:04:21.809691   37257 main.go:141] libmachine: (multinode-434318) Calling .GetSSHUsername
	I0913 19:04:21.809803   37257 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/multinode-434318/id_rsa Username:docker}
	I0913 19:04:21.897380   37257 ssh_runner.go:195] Run: systemctl --version
	I0913 19:04:21.903133   37257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 19:04:21.924424   37257 kubeconfig.go:125] found "multinode-434318" server: "https://192.168.39.5:8443"
	I0913 19:04:21.924459   37257 api_server.go:166] Checking apiserver status ...
	I0913 19:04:21.924497   37257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:04:21.938258   37257 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1885/cgroup
	W0913 19:04:21.948204   37257 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1885/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0913 19:04:21.948281   37257 ssh_runner.go:195] Run: ls
	I0913 19:04:21.953209   37257 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I0913 19:04:21.957387   37257 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I0913 19:04:21.957424   37257 status.go:422] multinode-434318 apiserver status = Running (err=<nil>)
	I0913 19:04:21.957437   37257 status.go:257] multinode-434318 status: &{Name:multinode-434318 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 19:04:21.957463   37257 status.go:255] checking status of multinode-434318-m02 ...
	I0913 19:04:21.957880   37257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 19:04:21.957924   37257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:04:21.973370   37257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40735
	I0913 19:04:21.973809   37257 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:04:21.974331   37257 main.go:141] libmachine: Using API Version  1
	I0913 19:04:21.974365   37257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:04:21.974654   37257 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:04:21.974812   37257 main.go:141] libmachine: (multinode-434318-m02) Calling .GetState
	I0913 19:04:21.976114   37257 status.go:330] multinode-434318-m02 host status = "Running" (err=<nil>)
	I0913 19:04:21.976132   37257 host.go:66] Checking if "multinode-434318-m02" exists ...
	I0913 19:04:21.976523   37257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 19:04:21.976571   37257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:04:21.991694   37257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41837
	I0913 19:04:21.992184   37257 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:04:21.992578   37257 main.go:141] libmachine: Using API Version  1
	I0913 19:04:21.992597   37257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:04:21.992945   37257 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:04:21.993121   37257 main.go:141] libmachine: (multinode-434318-m02) Calling .GetIP
	I0913 19:04:21.995767   37257 main.go:141] libmachine: (multinode-434318-m02) DBG | domain multinode-434318-m02 has defined MAC address 52:54:00:d4:37:77 in network mk-multinode-434318
	I0913 19:04:21.996125   37257 main.go:141] libmachine: (multinode-434318-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:37:77", ip: ""} in network mk-multinode-434318: {Iface:virbr1 ExpiryTime:2024-09-13 20:02:23 +0000 UTC Type:0 Mac:52:54:00:d4:37:77 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-434318-m02 Clientid:01:52:54:00:d4:37:77}
	I0913 19:04:21.996151   37257 main.go:141] libmachine: (multinode-434318-m02) DBG | domain multinode-434318-m02 has defined IP address 192.168.39.64 and MAC address 52:54:00:d4:37:77 in network mk-multinode-434318
	I0913 19:04:21.996296   37257 host.go:66] Checking if "multinode-434318-m02" exists ...
	I0913 19:04:21.996642   37257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 19:04:21.996683   37257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:04:22.011780   37257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41215
	I0913 19:04:22.012279   37257 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:04:22.012811   37257 main.go:141] libmachine: Using API Version  1
	I0913 19:04:22.012829   37257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:04:22.013125   37257 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:04:22.013292   37257 main.go:141] libmachine: (multinode-434318-m02) Calling .DriverName
	I0913 19:04:22.013460   37257 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 19:04:22.013478   37257 main.go:141] libmachine: (multinode-434318-m02) Calling .GetSSHHostname
	I0913 19:04:22.016454   37257 main.go:141] libmachine: (multinode-434318-m02) DBG | domain multinode-434318-m02 has defined MAC address 52:54:00:d4:37:77 in network mk-multinode-434318
	I0913 19:04:22.016845   37257 main.go:141] libmachine: (multinode-434318-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:37:77", ip: ""} in network mk-multinode-434318: {Iface:virbr1 ExpiryTime:2024-09-13 20:02:23 +0000 UTC Type:0 Mac:52:54:00:d4:37:77 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-434318-m02 Clientid:01:52:54:00:d4:37:77}
	I0913 19:04:22.016868   37257 main.go:141] libmachine: (multinode-434318-m02) DBG | domain multinode-434318-m02 has defined IP address 192.168.39.64 and MAC address 52:54:00:d4:37:77 in network mk-multinode-434318
	I0913 19:04:22.017027   37257 main.go:141] libmachine: (multinode-434318-m02) Calling .GetSSHPort
	I0913 19:04:22.017189   37257 main.go:141] libmachine: (multinode-434318-m02) Calling .GetSSHKeyPath
	I0913 19:04:22.017331   37257 main.go:141] libmachine: (multinode-434318-m02) Calling .GetSSHUsername
	I0913 19:04:22.017466   37257 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19636-3886/.minikube/machines/multinode-434318-m02/id_rsa Username:docker}
	I0913 19:04:22.101195   37257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 19:04:22.116005   37257 status.go:257] multinode-434318-m02 status: &{Name:multinode-434318-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0913 19:04:22.116040   37257 status.go:255] checking status of multinode-434318-m03 ...
	I0913 19:04:22.116400   37257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 19:04:22.116446   37257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:04:22.131830   37257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35231
	I0913 19:04:22.132326   37257 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:04:22.132871   37257 main.go:141] libmachine: Using API Version  1
	I0913 19:04:22.132892   37257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:04:22.133215   37257 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:04:22.133368   37257 main.go:141] libmachine: (multinode-434318-m03) Calling .GetState
	I0913 19:04:22.134861   37257 status.go:330] multinode-434318-m03 host status = "Stopped" (err=<nil>)
	I0913 19:04:22.134878   37257 status.go:343] host is not running, skipping remaining checks
	I0913 19:04:22.134886   37257 status.go:257] multinode-434318-m03 status: &{Name:multinode-434318-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.34s)

TestMultiNode/serial/StartAfterStop (42.17s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 node start m03 -v=7 --alsologtostderr
E0913 19:04:24.102272   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-434318 node start m03 -v=7 --alsologtostderr: (41.472363481s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (42.17s)

TestMultiNode/serial/RestartKeepsNodes (174.08s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-434318
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-434318
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-434318: (28.156209644s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-434318 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-434318 --wait=true -v=8 --alsologtostderr: (2m25.823511128s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-434318
--- PASS: TestMultiNode/serial/RestartKeepsNodes (174.08s)

TestMultiNode/serial/DeleteNode (2.35s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-434318 node delete m03: (1.786647584s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.35s)

TestMultiNode/serial/StopMultiNode (25.1s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-434318 stop: (24.929446759s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-434318 status: exit status 7 (85.28856ms)

-- stdout --
	multinode-434318
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-434318-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-434318 status --alsologtostderr: exit status 7 (83.889049ms)

-- stdout --
	multinode-434318
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-434318-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0913 19:08:25.782583   39001 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:08:25.782705   39001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:08:25.782713   39001 out.go:358] Setting ErrFile to fd 2...
	I0913 19:08:25.782717   39001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:08:25.782890   39001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3886/.minikube/bin
	I0913 19:08:25.783045   39001 out.go:352] Setting JSON to false
	I0913 19:08:25.783071   39001 mustload.go:65] Loading cluster: multinode-434318
	I0913 19:08:25.783132   39001 notify.go:220] Checking for updates...
	I0913 19:08:25.783523   39001 config.go:182] Loaded profile config "multinode-434318": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 19:08:25.783544   39001 status.go:255] checking status of multinode-434318 ...
	I0913 19:08:25.784071   39001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 19:08:25.784113   39001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:08:25.799216   39001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0913 19:08:25.799705   39001 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:08:25.800403   39001 main.go:141] libmachine: Using API Version  1
	I0913 19:08:25.800453   39001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:08:25.800873   39001 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:08:25.801059   39001 main.go:141] libmachine: (multinode-434318) Calling .GetState
	I0913 19:08:25.803359   39001 status.go:330] multinode-434318 host status = "Stopped" (err=<nil>)
	I0913 19:08:25.803387   39001 status.go:343] host is not running, skipping remaining checks
	I0913 19:08:25.803395   39001 status.go:257] multinode-434318 status: &{Name:multinode-434318 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 19:08:25.803442   39001 status.go:255] checking status of multinode-434318-m02 ...
	I0913 19:08:25.803766   39001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0913 19:08:25.803815   39001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0913 19:08:25.819057   39001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40281
	I0913 19:08:25.819515   39001 main.go:141] libmachine: () Calling .GetVersion
	I0913 19:08:25.819997   39001 main.go:141] libmachine: Using API Version  1
	I0913 19:08:25.820017   39001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0913 19:08:25.820305   39001 main.go:141] libmachine: () Calling .GetMachineName
	I0913 19:08:25.820517   39001 main.go:141] libmachine: (multinode-434318-m02) Calling .GetState
	I0913 19:08:25.822246   39001 status.go:330] multinode-434318-m02 host status = "Stopped" (err=<nil>)
	I0913 19:08:25.822261   39001 status.go:343] host is not running, skipping remaining checks
	I0913 19:08:25.822267   39001 status.go:257] multinode-434318-m02 status: &{Name:multinode-434318-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.10s)

TestMultiNode/serial/RestartMultiNode (118.13s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-434318 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0913 19:08:47.848507   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:09:24.102301   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-434318 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m57.616980528s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-434318 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (118.13s)

TestMultiNode/serial/ValidateNameConflict (52.54s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-434318
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-434318-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-434318-m02 --driver=kvm2 : exit status 14 (62.372998ms)

-- stdout --
	* [multinode-434318-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-3886/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3886/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-434318-m02' is duplicated with machine name 'multinode-434318-m02' in profile 'multinode-434318'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-434318-m03 --driver=kvm2 
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-434318-m03 --driver=kvm2 : (51.241141356s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-434318
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-434318: exit status 80 (201.442511ms)

-- stdout --
	* Adding node m03 to cluster multinode-434318 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-434318-m03 already exists in multinode-434318-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-434318-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (52.54s)

TestPreload (184.28s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-023170 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0913 19:12:27.170631   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-023170 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m57.518320952s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-023170 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-023170 image pull gcr.io/k8s-minikube/busybox: (1.467055613s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-023170
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-023170: (12.538071411s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-023170 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0913 19:13:47.848771   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-023170 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (51.735201003s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-023170 image list
helpers_test.go:175: Cleaning up "test-preload-023170" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-023170
--- PASS: TestPreload (184.28s)

TestScheduledStopUnix (119.44s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-167536 --memory=2048 --driver=kvm2 
E0913 19:14:24.102492   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-167536 --memory=2048 --driver=kvm2 : (47.891161962s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-167536 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-167536 -n scheduled-stop-167536
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-167536 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-167536 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-167536 -n scheduled-stop-167536
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-167536
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-167536 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-167536
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-167536: exit status 7 (64.424546ms)

-- stdout --
	scheduled-stop-167536
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-167536 -n scheduled-stop-167536
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-167536 -n scheduled-stop-167536: exit status 7 (64.692743ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-167536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-167536
--- PASS: TestScheduledStopUnix (119.44s)

TestSkaffold (134.15s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe717527961 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-714540 --memory=2600 --driver=kvm2 
E0913 19:16:50.920250   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-714540 --memory=2600 --driver=kvm2 : (50.184375213s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe717527961 run --minikube-profile skaffold-714540 --kube-context skaffold-714540 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe717527961 run --minikube-profile skaffold-714540 --kube-context skaffold-714540 --status-check=true --port-forward=false --interactive=false: (1m10.981821452s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6d8c8c744f-77j2q" [3a00edde-daf8-4288-8e0a-e5ed54442cf5] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004491111s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-9df6d7754-9k7ht" [43970c85-c1cc-44c7-b6d2-9b3e9922d258] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.005056995s
helpers_test.go:175: Cleaning up "skaffold-714540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-714540
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-714540: (1.209676486s)
--- PASS: TestSkaffold (134.15s)

TestRunningBinaryUpgrade (203.84s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3428766039 start -p running-upgrade-950639 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3428766039 start -p running-upgrade-950639 --memory=2200 --vm-driver=kvm2 : (1m53.328325782s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-950639 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-950639 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m28.880270719s)
helpers_test.go:175: Cleaning up "running-upgrade-950639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-950639
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-950639: (1.151519381s)
--- PASS: TestRunningBinaryUpgrade (203.84s)

TestKubernetesUpgrade (190.03s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-069305 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-069305 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (56.544401902s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-069305
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-069305: (3.317944488s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-069305 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-069305 status --format={{.Host}}: exit status 7 (82.620934ms)

-- stdout --
	Stopped

                                                
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-069305 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-069305 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 : (1m29.521373401s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-069305 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-069305 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-069305 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (88.30918ms)

-- stdout --
	* [kubernetes-upgrade-069305] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-3886/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3886/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-069305
	    minikube start -p kubernetes-upgrade-069305 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0693052 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-069305 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-069305 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-069305 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 : (39.404367177s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-069305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-069305
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-069305: (1.014542989s)
--- PASS: TestKubernetesUpgrade (190.03s)

TestStoppedBinaryUpgrade/Setup (0.6s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.60s)

TestStoppedBinaryUpgrade/Upgrade (319.28s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3213190511 start -p stopped-upgrade-832363 --memory=2200 --vm-driver=kvm2 
E0913 19:18:47.848953   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:19:24.103117   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3213190511 start -p stopped-upgrade-832363 --memory=2200 --vm-driver=kvm2 : (2m14.279832144s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3213190511 -p stopped-upgrade-832363 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3213190511 -p stopped-upgrade-832363 stop: (13.179729604s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-832363 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-832363 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (2m51.823077285s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (319.28s)

TestPause/serial/Start (109.76s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-373160 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-373160 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m49.758649722s)
--- PASS: TestPause/serial/Start (109.76s)

TestPause/serial/SecondStartNoReconfiguration (61.16s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-373160 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-373160 --alsologtostderr -v=1 --driver=kvm2 : (1m1.129684503s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (61.16s)

TestPause/serial/Pause (2.18s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-373160 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-373160 --alsologtostderr -v=5: (2.175214426s)
--- PASS: TestPause/serial/Pause (2.18s)

TestPause/serial/VerifyStatus (0.29s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-373160 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-373160 --output=json --layout=cluster: exit status 2 (286.50944ms)

-- stdout --
	{"Name":"pause-373160","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-373160","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)

TestPause/serial/Unpause (0.61s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-373160 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)

TestPause/serial/PauseAgain (0.73s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-373160 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.73s)

TestPause/serial/DeletePaused (1.05s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-373160 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-373160 --alsologtostderr -v=5: (1.050790374s)
--- PASS: TestPause/serial/DeletePaused (1.05s)

TestPause/serial/VerifyDeletedResources (0.54s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.54s)

TestStoppedBinaryUpgrade/MinikubeLogs (2s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-832363
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-832363: (2.002127525s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.00s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-236815 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-236815 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (73.811907ms)

-- stdout --
	* [NoKubernetes-236815] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-3886/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3886/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestNoKubernetes/serial/StartWithK8s (83.11s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-236815 --driver=kvm2 
E0913 19:24:04.860564   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/skaffold-714540/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-236815 --driver=kvm2 : (1m22.863188867s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-236815 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (83.11s)

TestNoKubernetes/serial/StartWithStopK8s (45.09s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-236815 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-236815 --no-kubernetes --driver=kvm2 : (43.56759612s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-236815 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-236815 status -o json: exit status 2 (239.657884ms)

-- stdout --
	{"Name":"NoKubernetes-236815","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-236815
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-236815: (1.285670676s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (45.09s)

TestNetworkPlugins/group/auto/Start (63.96s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-896861 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-896861 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m3.956528657s)
--- PASS: TestNetworkPlugins/group/auto/Start (63.96s)

TestNoKubernetes/serial/Start (56.52s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-236815 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-236815 --no-kubernetes --driver=kvm2 : (56.524662009s)
--- PASS: TestNoKubernetes/serial/Start (56.52s)

TestNetworkPlugins/group/flannel/Start (113.9s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-896861 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-896861 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m53.902937088s)
--- PASS: TestNetworkPlugins/group/flannel/Start (113.90s)

TestNetworkPlugins/group/enable-default-cni/Start (123.44s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-896861 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-896861 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (2m3.435984979s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (123.44s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-236815 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-236815 "sudo systemctl is-active --quiet service kubelet": exit status 1 (218.093285ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

TestNoKubernetes/serial/ProfileList (0.83s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.83s)

TestNoKubernetes/serial/Stop (2.3s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-236815
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-236815: (2.30083886s)
--- PASS: TestNoKubernetes/serial/Stop (2.30s)

TestNoKubernetes/serial/StartNoArgs (72.6s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-236815 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-236815 --driver=kvm2 : (1m12.599092909s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (72.60s)

TestNetworkPlugins/group/auto/KubeletFlags (0.2s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-896861 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

TestNetworkPlugins/group/auto/NetCatPod (10.23s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-896861 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-59n46" [af83ae41-379d-4c3b-9455-e57ee19df563] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-59n46" [af83ae41-379d-4c3b-9455-e57ee19df563] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004414961s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.23s)

TestNetworkPlugins/group/auto/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-896861 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-896861 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-896861 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

TestNetworkPlugins/group/bridge/Start (101.58s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-896861 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-896861 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m41.578116956s)
--- PASS: TestNetworkPlugins/group/bridge/Start (101.58s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-p2g2w" [592306d0-2b44-4c9c-889f-d14d8a379659] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004396944s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-896861 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/flannel/NetCatPod (11.26s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-896861 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hpmdd" [0e94d01e-78ed-4d65-b12b-d0851553c4a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hpmdd" [0e94d01e-78ed-4d65-b12b-d0851553c4a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005792569s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-236815 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-236815 "sudo systemctl is-active --quiet service kubelet": exit status 1 (235.136306ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestNetworkPlugins/group/kubenet/Start (122.58s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-896861 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-896861 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (2m2.584748255s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (122.58s)

TestNetworkPlugins/group/flannel/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-896861 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-896861 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-896861 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-896861 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.25s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-896861 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zh2ln" [47df58d9-50d4-4097-8193-31feba8ef9ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zh2ln" [47df58d9-50d4-4097-8193-31feba8ef9ff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.012466175s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.25s)

TestNetworkPlugins/group/enable-default-cni/DNS (21.61s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-896861 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-896861 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.183149881s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-896861 exec deployment/netcat -- nslookup kubernetes.default
E0913 19:29:07.172300   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Done: kubectl --context enable-default-cni-896861 exec deployment/netcat -- nslookup kubernetes.default: (5.161032269s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (21.61s)

TestNetworkPlugins/group/calico/Start (126.34s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-896861 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E0913 19:28:47.849206   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:28:51.586207   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/skaffold-714540/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-896861 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m6.336324228s)
--- PASS: TestNetworkPlugins/group/calico/Start (126.34s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-896861 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-896861 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-896861 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

TestNetworkPlugins/group/bridge/NetCatPod (12.33s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-896861 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rkgsg" [0f31828e-cfde-4e44-9675-239b651d30ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0913 19:29:24.102221   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-rkgsg" [0f31828e-cfde-4e44-9675-239b651d30ad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004230082s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.33s)

TestNetworkPlugins/group/kindnet/Start (101.26s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-896861 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-896861 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m41.263702117s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (101.26s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-896861 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-896861 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-896861 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/Start (89.62s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-896861 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
E0913 19:29:55.252137   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/gvisor-419875/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:30:05.494326   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/gvisor-419875/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:30:25.976184   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/gvisor-419875/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-896861 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m29.623925482s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (89.62s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-896861 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-896861 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5xkrq" [dbb7627a-da97-4ce0-b909-cc94b23f29d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5xkrq" [dbb7627a-da97-4ce0-b909-cc94b23f29d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.004805074s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.31s)

TestNetworkPlugins/group/kubenet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-896861 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.19s)

TestNetworkPlugins/group/kubenet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-896861 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

TestNetworkPlugins/group/kubenet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-896861 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4bj2g" [81f9ecbc-22dc-424f-96ce-e09e3f826a01] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007721265s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/false/Start (101.78s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-896861 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-896861 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m41.777642936s)
--- PASS: TestNetworkPlugins/group/false/Start (101.78s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-896861 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-896861 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-46hgs" [3e123db4-bce0-4e07-8e78-4451fc9f3ec8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-46hgs" [3e123db4-bce0-4e07-8e78-4451fc9f3ec8] Running
E0913 19:31:06.937846   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/gvisor-419875/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005859674s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.32s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-pdlfc" [462a3111-e4b9-40ae-a2d7-947d9f3458ab] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004859514s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-896861 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-896861 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-896861 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-896861 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-896861 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vpft6" [0cad19ed-5440-4830-ad4c-d186b44611c9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vpft6" [0cad19ed-5440-4830-ad4c-d186b44611c9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.009539924s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-896861 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-896861 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h7nwm" [2b96a49a-4769-4f55-a024-3b999d95ad3f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-h7nwm" [2b96a49a-4769-4f55-a024-3b999d95ad3f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.008207368s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-896861 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.20s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-896861 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-896861 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestStartStop/group/old-k8s-version/serial/FirstStart (170.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-716425 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-716425 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (2m50.81836223s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (170.82s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-896861 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-896861 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-896861 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestStartStop/group/no-preload/serial/FirstStart (93.43s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-729851 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-729851 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1: (1m33.43044226s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (93.43s)

TestStartStop/group/embed-certs/serial/FirstStart (102.03s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-632198 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1
E0913 19:32:12.647466   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/auto-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:12.654032   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/auto-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:12.665526   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/auto-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:12.686999   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/auto-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:12.728504   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/auto-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:12.810062   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/auto-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:12.971608   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/auto-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:13.293933   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/auto-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:13.935620   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/auto-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:15.217230   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/auto-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:17.778933   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/auto-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:22.901044   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/auto-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:28.860254   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/gvisor-419875/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:33.142625   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/auto-896861/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-632198 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1: (1m42.028725675s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (102.03s)

TestNetworkPlugins/group/false/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-896861 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.27s)

TestNetworkPlugins/group/false/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-896861 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-p5v8l" [37813fd7-c5cd-4253-8499-b0644250df45] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-p5v8l" [37813fd7-c5cd-4253-8499-b0644250df45] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.005313659s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.32s)

TestNetworkPlugins/group/false/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-896861 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.21s)

TestNetworkPlugins/group/false/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-896861 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.17s)

TestNetworkPlugins/group/false/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-896861 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.16s)
E0913 19:39:44.995190   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/gvisor-419875/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:39:46.948267   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/skaffold-714540/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:39:49.792123   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/bridge-896861/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (101.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-587809 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1
E0913 19:33:13.054486   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:13.060992   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:13.072512   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:13.094042   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:13.135499   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:13.217026   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:13.378609   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:13.700780   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:14.343097   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:15.625293   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-587809 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1: (1m41.62324555s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (101.62s)

TestStartStop/group/no-preload/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-729851 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9c7d4964-db84-4e28-b957-bad6b1ba5140] Pending
E0913 19:33:18.187126   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [9c7d4964-db84-4e28-b957-bad6b1ba5140] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9c7d4964-db84-4e28-b957-bad6b1ba5140] Running
E0913 19:33:23.309095   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:23.879908   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/skaffold-714540/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004008649s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-729851 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-729851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-729851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.040921542s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-729851 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/no-preload/serial/Stop (13.38s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-729851 --alsologtostderr -v=3
E0913 19:33:30.922501   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:33.550683   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-729851 --alsologtostderr -v=3: (13.375073719s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.38s)

TestStartStop/group/embed-certs/serial/DeployApp (9.33s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-632198 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
E0913 19:33:34.586146   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/auto-896861/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [9c32ec94-0fb1-4952-9416-d528ab049980] Pending
E0913 19:33:35.036791   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/enable-default-cni-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:35.043176   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/enable-default-cni-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:35.054608   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/enable-default-cni-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:35.076069   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/enable-default-cni-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:35.117512   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/enable-default-cni-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:35.199079   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/enable-default-cni-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:35.360774   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/enable-default-cni-896861/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [9c32ec94-0fb1-4952-9416-d528ab049980] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0913 19:33:35.682944   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/enable-default-cni-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:36.324517   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/enable-default-cni-896861/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [9c32ec94-0fb1-4952-9416-d528ab049980] Running
E0913 19:33:37.606738   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/enable-default-cni-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:40.168538   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/enable-default-cni-896861/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004353144s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-632198 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-729851 -n no-preload-729851
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-729851 -n no-preload-729851: exit status 7 (63.586296ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-729851 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (304.09s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-729851 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-729851 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1: (5m3.82712521s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-729851 -n no-preload-729851
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (304.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.41s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-632198 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-632198 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.332281994s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-632198 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.41s)

TestStartStop/group/embed-certs/serial/Stop (13.39s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-632198 --alsologtostderr -v=3
E0913 19:33:45.290878   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/enable-default-cni-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:47.848886   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:54.032374   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:55.532814   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/enable-default-cni-896861/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-632198 --alsologtostderr -v=3: (13.390462083s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.39s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-632198 -n embed-certs-632198
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-632198 -n embed-certs-632198: exit status 7 (77.41141ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-632198 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (316.59s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-632198 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1
E0913 19:34:16.014690   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/enable-default-cni-896861/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-632198 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1: (5m16.306500033s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-632198 -n embed-certs-632198
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (316.59s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.62s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-716425 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [285fc185-86d6-4d9b-9a94-2f055ebe8ac6] Pending
E0913 19:34:22.086699   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/bridge-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:22.093248   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/bridge-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:22.104821   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/bridge-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:22.126370   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/bridge-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:22.168007   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/bridge-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:22.249471   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/bridge-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:22.411530   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/bridge-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:22.733025   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/bridge-896861/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [285fc185-86d6-4d9b-9a94-2f055ebe8ac6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0913 19:34:23.375033   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/bridge-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:24.102203   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:24.657309   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/bridge-896861/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [285fc185-86d6-4d9b-9a94-2f055ebe8ac6] Running
E0913 19:34:27.219526   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/bridge-896861/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.005179032s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-716425 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.62s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.35s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-716425 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-716425 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.35s)

TestStartStop/group/old-k8s-version/serial/Stop (13.42s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-716425 --alsologtostderr -v=3
E0913 19:34:32.341654   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/bridge-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:34.994072   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:42.584874   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/bridge-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:44.996206   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/gvisor-419875/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-716425 --alsologtostderr -v=3: (13.422029009s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.42s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-716425 -n old-k8s-version-716425
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-716425 -n old-k8s-version-716425: exit status 7 (64.436861ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-716425 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (456.73s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-716425 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-716425 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (7m36.473537192s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-716425 -n old-k8s-version-716425
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (456.73s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-587809 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2cc680ab-7eea-4273-993c-ad615f6c75a7] Pending
helpers_test.go:344: "busybox" [2cc680ab-7eea-4273-993c-ad615f6c75a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2cc680ab-7eea-4273-993c-ad615f6c75a7] Running
E0913 19:34:56.507615   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/auto-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:56.976143   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/enable-default-cni-896861/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004488676s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-587809 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.00s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-587809 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-587809 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.35s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-587809 --alsologtostderr -v=3
E0913 19:35:03.066248   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/bridge-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:12.701935   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/gvisor-419875/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-587809 --alsologtostderr -v=3: (13.344953599s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-587809 -n default-k8s-diff-port-587809
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-587809 -n default-k8s-diff-port-587809: exit status 7 (89.522815ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-587809 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.8s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-587809 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1
E0913 19:35:27.719860   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kubenet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:27.726318   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kubenet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:27.737872   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kubenet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:27.759414   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kubenet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:27.801034   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kubenet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:27.882801   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kubenet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:28.044630   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kubenet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:28.366391   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kubenet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:29.008449   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kubenet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:30.290175   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kubenet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:32.851904   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kubenet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:37.974079   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kubenet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:44.028627   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/bridge-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:48.215999   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kubenet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:54.027948   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/calico-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:54.034378   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/calico-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:54.045844   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/calico-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:54.067283   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/calico-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:54.108913   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/calico-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:54.190404   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/calico-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:54.351921   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/calico-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:54.673517   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/calico-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:55.315149   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/calico-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:56.597467   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/calico-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:56.915389   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:59.158859   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/calico-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:04.280233   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/calico-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:08.698140   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kubenet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:08.837781   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kindnet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:08.844254   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kindnet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:08.855720   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kindnet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:08.877225   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kindnet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:08.918670   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kindnet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:09.000151   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kindnet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:09.161781   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kindnet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:09.483560   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kindnet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:10.125774   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kindnet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:11.407131   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kindnet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:13.968784   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kindnet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:14.522421   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/calico-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:18.898190   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/enable-default-cni-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:19.090892   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kindnet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:23.575550   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/custom-flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:23.582004   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/custom-flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:23.593473   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/custom-flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:23.615161   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/custom-flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:23.656669   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/custom-flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:23.738173   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/custom-flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:23.899812   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/custom-flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:24.221822   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/custom-flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:24.863558   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/custom-flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:26.144958   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/custom-flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:28.707157   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/custom-flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:29.332681   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kindnet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:33.828482   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/custom-flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:35.004293   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/calico-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:44.070151   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/custom-flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:49.659573   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kubenet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:49.814074   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kindnet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:04.551811   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/custom-flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:05.950708   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/bridge-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:12.647227   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/auto-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:15.966482   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/calico-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:30.776043   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kindnet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:39.199796   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/false-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:39.206264   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/false-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:39.217801   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/false-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:39.239340   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/false-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:39.280835   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/false-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:39.362371   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/false-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:39.523980   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/false-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:39.845561   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/false-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:40.349079   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/auto-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:40.487567   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/false-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:41.768923   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/false-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:44.330236   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/false-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:45.513381   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/custom-flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:49.451548   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/false-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:59.693622   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/false-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:38:11.581649   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kubenet-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:38:13.054122   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:38:20.175091   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/false-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:38:23.880138   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/skaffold-714540/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:38:35.036885   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/enable-default-cni-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:38:37.888739   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/calico-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:38:40.757726   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-587809 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1: (5m0.537754094s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-587809 -n default-k8s-diff-port-587809
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.80s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qtfkk" [8d4f5ebe-9461-4a6d-b579-bad3e2dc8ed5] Running
E0913 19:38:47.849099   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/functional-988520/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:38:52.698043   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kindnet-896861/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005489798s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qtfkk" [8d4f5ebe-9461-4a6d-b579-bad3e2dc8ed5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012598629s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-729851 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-729851 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.70s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-729851 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-729851 -n no-preload-729851
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-729851 -n no-preload-729851: exit status 2 (271.26284ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-729851 -n no-preload-729851
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-729851 -n no-preload-729851: exit status 2 (272.122309ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-729851 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-729851 -n no-preload-729851
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-729851 -n no-preload-729851
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.70s)

TestStartStop/group/newest-cni/serial/FirstStart (64.89s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-591357 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1
E0913 19:39:02.740044   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/enable-default-cni-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:39:07.435017   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/custom-flannel-896861/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-591357 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1: (1m4.889723466s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (64.89s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qt7gs" [3da8f236-d00b-40ec-bde6-296bff671098] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qt7gs" [3da8f236-d00b-40ec-bde6-296bff671098] Running
E0913 19:39:22.086411   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/bridge-896861/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:39:24.103098   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/addons-084503/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004990971s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qt7gs" [3da8f236-d00b-40ec-bde6-296bff671098] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005475484s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-632198 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-632198 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (2.66s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-632198 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-632198 -n embed-certs-632198
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-632198 -n embed-certs-632198: exit status 2 (235.71541ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-632198 -n embed-certs-632198
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-632198 -n embed-certs-632198: exit status 2 (244.323783ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-632198 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-632198 -n embed-certs-632198
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-632198 -n embed-certs-632198
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.66s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-591357 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/newest-cni/serial/Stop (13.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-591357 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-591357 --alsologtostderr -v=3: (13.342461425s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.34s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zsscz" [1879b0b4-63c4-432f-8127-d3db9584f13c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004249648s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zsscz" [1879b0b4-63c4-432f-8127-d3db9584f13c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004902761s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-587809 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-591357 -n newest-cni-591357
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-591357 -n newest-cni-591357: exit status 7 (64.073023ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-591357 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
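The EnableAddonAfterStop flow above hinges on `minikube status` exiting with code 7 and printing `Stopped` when the host is down, after which addon changes can still be recorded for the next start. A minimal shell sketch of that same check, using the profile name from the log and guarded so it skips cleanly when minikube or the profile is absent (the guards are illustrative additions, not part of the test):

```shell
# Check host state the way the test does: exit code 7 plus "Stopped"
# means the machine is down but addons can still be enabled.
p="newest-cni-591357"   # profile name taken from the log above
if ! command -v minikube >/dev/null 2>&1; then
  echo "minikube not installed; skipping"
elif ! minikube profile list 2>/dev/null | grep -q "$p"; then
  echo "profile $p not found; skipping"
else
  host=$(minikube status --format='{{.Host}}' -p "$p")
  rc=$?
  echo "host=$host rc=$rc"
  if [ "$rc" -eq 7 ] && [ "$host" = "Stopped" ]; then
    # Addon changes are persisted in the profile config while stopped.
    minikube addons enable dashboard -p "$p"
  fi
fi
```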

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (36.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-591357 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1
E0913 19:40:23.059433   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/false-896861/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-591357 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1: (35.918259101s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-591357 -n newest-cni-591357
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-587809 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-587809 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-587809 -n default-k8s-diff-port-587809
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-587809 -n default-k8s-diff-port-587809: exit status 2 (256.025941ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-587809 -n default-k8s-diff-port-587809
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-587809 -n default-k8s-diff-port-587809: exit status 2 (255.495552ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-587809 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-587809 -n default-k8s-diff-port-587809
E0913 19:40:27.719376   11050 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3886/.minikube/profiles/kubenet-896861/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-587809 -n default-k8s-diff-port-587809
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-591357 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-591357 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-591357 -n newest-cni-591357
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-591357 -n newest-cni-591357: exit status 2 (227.434027ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-591357 -n newest-cni-591357
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-591357 -n newest-cni-591357: exit status 2 (233.331266ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-591357 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-591357 -n newest-cni-591357
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-591357 -n newest-cni-591357
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-8q2vb" [e3fcb4c9-5634-4b2b-bef5-ee8665f90e96] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003914936s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-8q2vb" [e3fcb4c9-5634-4b2b-bef5-ee8665f90e96] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004444303s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-716425 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-716425 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-716425 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-716425 -n old-k8s-version-716425
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-716425 -n old-k8s-version-716425: exit status 2 (235.46446ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-716425 -n old-k8s-version-716425
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-716425 -n old-k8s-version-716425: exit status 2 (237.267593ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-716425 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-716425 -n old-k8s-version-716425
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-716425 -n old-k8s-version-716425
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.34s)
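The three Pause tests above all exercise the same cycle: pause the profile, confirm `status` exits 2 with the apiserver `Paused` and the kubelet `Stopped`, then unpause and confirm both recover. That cycle can be reproduced by hand; a minimal sketch using the profile name from the log, with skip guards added for machines where minikube or the profile is not present:

```shell
# Reproduce the Pause test cycle: pause, observe exit status 2, unpause.
p="old-k8s-version-716425"   # profile name taken from the log above
if ! command -v minikube >/dev/null 2>&1; then
  echo "minikube not installed; skipping"
elif ! minikube profile list 2>/dev/null | grep -q "$p"; then
  echo "profile $p not found; skipping"
else
  minikube pause -p "$p"
  # While paused, status exits 2 ("may be ok" per the test harness).
  minikube status --format='{{.APIServer}}' -p "$p" || echo "apiserver paused (exit $?)"
  minikube status --format='{{.Kubelet}}'  -p "$p" || echo "kubelet stopped (exit $?)"
  minikube unpause -p "$p"
  minikube status --format='{{.APIServer}}' -p "$p"
  minikube status --format='{{.Kubelet}}'  -p "$p"
fi
```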

                                                
                                    

Test skip (31/340)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-896861 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-896861

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-896861

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-896861

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-896861

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-896861

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-896861

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-896861

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-896861

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-896861

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-896861

>>> host: /etc/nsswitch.conf:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: /etc/hosts:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: /etc/resolv.conf:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-896861

>>> host: crictl pods:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: crictl containers:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> k8s: describe netcat deployment:
error: context "cilium-896861" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-896861" does not exist

>>> k8s: netcat logs:
error: context "cilium-896861" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-896861" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-896861" does not exist

>>> k8s: coredns logs:
error: context "cilium-896861" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-896861" does not exist

>>> k8s: api server logs:
error: context "cilium-896861" does not exist

>>> host: /etc/cni:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: ip a s:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: ip r s:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: iptables-save:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: iptables table nat:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-896861

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-896861

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-896861" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-896861" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-896861

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-896861

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-896861" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-896861" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-896861" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-896861" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-896861" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: kubelet daemon config:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> k8s: kubelet logs:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-896861

>>> host: docker daemon status:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: docker daemon config:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: docker system info:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: cri-docker daemon status:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: cri-docker daemon config:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: cri-dockerd version:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: containerd daemon status:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: containerd daemon config:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: containerd config dump:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: crio daemon status:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: crio daemon config:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: /etc/crio:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

>>> host: crio config:
* Profile "cilium-896861" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-896861"

----------------------- debugLogs end: cilium-896861 [took: 3.579990816s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-896861" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-896861
--- SKIP: TestNetworkPlugins/group/cilium (3.74s)
TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-644987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-644987
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)