Test Report: KVM_Linux 19667

Commit: 39f19baf3a7e1c810682dda0eb22abd909c6f2ab | Date: 2024-09-18 | Build: 36273

Failed tests (1/341)

Order  Failed test                   Duration (s)
33     TestAddons/parallel/Registry  74.9
TestAddons/parallel/Registry (74.9s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.037129ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-vvhf9" [456d61ad-102d-4e2c-9b99-6bbce9fe2788] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004099413s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vrcfx" [167c8d8a-ffcc-4e8d-be37-1288a0ac0e73] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005430848s
addons_test.go:342: (dbg) Run:  kubectl --context addons-656419 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-656419 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-656419 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.087636536s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-656419 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-656419 ip
2024/09/18 19:52:46 [DEBUG] GET http://192.168.39.154:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-656419 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-656419 -n addons-656419
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-656419 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-656419 logs -n 25: (1.139918491s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-479685                                                                     | download-only-479685 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| delete  | -p download-only-174579                                                                     | download-only-174579 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| delete  | -p download-only-479685                                                                     | download-only-479685 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-429071 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | binary-mirror-429071                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43717                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-429071                                                                     | binary-mirror-429071 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| addons  | disable dashboard -p                                                                        | addons-656419        | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | addons-656419                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-656419        | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | addons-656419                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-656419 --wait=true                                                                | addons-656419        | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:42 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2  --addons=ingress                                                             |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-656419 addons disable                                                                | addons-656419        | jenkins | v1.34.0 | 18 Sep 24 19:43 UTC | 18 Sep 24 19:43 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-656419        | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | -p addons-656419                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-656419        | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | -p addons-656419                                                                            |                      |         |         |                     |                     |
	| addons  | addons-656419 addons disable                                                                | addons-656419        | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-656419 addons disable                                                                | addons-656419        | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-656419        | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | addons-656419                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-656419 ssh cat                                                                       | addons-656419        | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | /opt/local-path-provisioner/pvc-fe948a04-a782-481a-a290-a6b15945d18d_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-656419 addons disable                                                                | addons-656419        | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:52 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-656419        | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:52 UTC |
	|         | addons-656419                                                                               |                      |         |         |                     |                     |
	| addons  | addons-656419 addons                                                                        | addons-656419        | jenkins | v1.34.0 | 18 Sep 24 19:52 UTC | 18 Sep 24 19:52 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-656419 addons disable                                                                | addons-656419        | jenkins | v1.34.0 | 18 Sep 24 19:52 UTC | 18 Sep 24 19:52 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-656419 ssh curl -s                                                                   | addons-656419        | jenkins | v1.34.0 | 18 Sep 24 19:52 UTC | 18 Sep 24 19:52 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-656419 ip                                                                            | addons-656419        | jenkins | v1.34.0 | 18 Sep 24 19:52 UTC | 18 Sep 24 19:52 UTC |
	| addons  | addons-656419 addons disable                                                                | addons-656419        | jenkins | v1.34.0 | 18 Sep 24 19:52 UTC | 18 Sep 24 19:52 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-656419 addons disable                                                                | addons-656419        | jenkins | v1.34.0 | 18 Sep 24 19:52 UTC | 18 Sep 24 19:52 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| ip      | addons-656419 ip                                                                            | addons-656419        | jenkins | v1.34.0 | 18 Sep 24 19:52 UTC | 18 Sep 24 19:52 UTC |
	| addons  | addons-656419 addons disable                                                                | addons-656419        | jenkins | v1.34.0 | 18 Sep 24 19:52 UTC | 18 Sep 24 19:52 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 19:38:07
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 19:38:07.822880   15490 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:38:07.823118   15490 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:07.823126   15490 out.go:358] Setting ErrFile to fd 2...
	I0918 19:38:07.823130   15490 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:07.823322   15490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7655/.minikube/bin
	I0918 19:38:07.823945   15490 out.go:352] Setting JSON to false
	I0918 19:38:07.824812   15490 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1236,"bootTime":1726687052,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 19:38:07.824946   15490 start.go:139] virtualization: kvm guest
	I0918 19:38:07.827944   15490 out.go:177] * [addons-656419] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 19:38:07.830237   15490 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 19:38:07.830230   15490 notify.go:220] Checking for updates...
	I0918 19:38:07.832075   15490 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:38:07.833913   15490 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7655/kubeconfig
	I0918 19:38:07.836056   15490 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7655/.minikube
	I0918 19:38:07.837760   15490 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 19:38:07.840465   15490 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:38:07.843087   15490 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:38:07.878983   15490 out.go:177] * Using the kvm2 driver based on user configuration
	I0918 19:38:07.881153   15490 start.go:297] selected driver: kvm2
	I0918 19:38:07.881177   15490 start.go:901] validating driver "kvm2" against <nil>
	I0918 19:38:07.881190   15490 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:38:07.882043   15490 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:38:07.882120   15490 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7655/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 19:38:07.897660   15490 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0918 19:38:07.897729   15490 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 19:38:07.898012   15490 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 19:38:07.898049   15490 cni.go:84] Creating CNI manager for ""
	I0918 19:38:07.898114   15490 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 19:38:07.898129   15490 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 19:38:07.898222   15490 start.go:340] cluster config:
	{Name:addons-656419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-656419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:38:07.898354   15490 iso.go:125] acquiring lock: {Name:mk994b84cbc98dd0805f97e4539c3d8a9e02e7d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:38:07.900564   15490 out.go:177] * Starting "addons-656419" primary control-plane node in "addons-656419" cluster
	I0918 19:38:07.902339   15490 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 19:38:07.902395   15490 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0918 19:38:07.902405   15490 cache.go:56] Caching tarball of preloaded images
	I0918 19:38:07.902492   15490 preload.go:172] Found /home/jenkins/minikube-integration/19667-7655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0918 19:38:07.902505   15490 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 19:38:07.902875   15490 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/config.json ...
	I0918 19:38:07.902904   15490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/config.json: {Name:mk62c2d1f939857f52c235fd9fc7b9fd30deff61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:07.903078   15490 start.go:360] acquireMachinesLock for addons-656419: {Name:mk02bc1bf0d9da1bb27db0e41b522a72a5f90c1d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 19:38:07.903140   15490 start.go:364] duration metric: took 43.865µs to acquireMachinesLock for "addons-656419"
	I0918 19:38:07.903165   15490 start.go:93] Provisioning new machine with config: &{Name:addons-656419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-656419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 19:38:07.903246   15490 start.go:125] createHost starting for "" (driver="kvm2")
	I0918 19:38:07.905379   15490 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0918 19:38:07.905541   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:38:07.905586   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:38:07.920498   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42387
	I0918 19:38:07.920986   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:38:07.921603   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:38:07.921623   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:38:07.921975   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:38:07.922161   15490 main.go:141] libmachine: (addons-656419) Calling .GetMachineName
	I0918 19:38:07.922358   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:38:07.922516   15490 start.go:159] libmachine.API.Create for "addons-656419" (driver="kvm2")
	I0918 19:38:07.922544   15490 client.go:168] LocalClient.Create starting
	I0918 19:38:07.922582   15490 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19667-7655/.minikube/certs/ca.pem
	I0918 19:38:08.260709   15490 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19667-7655/.minikube/certs/cert.pem
	I0918 19:38:08.371652   15490 main.go:141] libmachine: Running pre-create checks...
	I0918 19:38:08.371678   15490 main.go:141] libmachine: (addons-656419) Calling .PreCreateCheck
	I0918 19:38:08.372211   15490 main.go:141] libmachine: (addons-656419) Calling .GetConfigRaw
	I0918 19:38:08.372702   15490 main.go:141] libmachine: Creating machine...
	I0918 19:38:08.372718   15490 main.go:141] libmachine: (addons-656419) Calling .Create
	I0918 19:38:08.372948   15490 main.go:141] libmachine: (addons-656419) Creating KVM machine...
	I0918 19:38:08.374373   15490 main.go:141] libmachine: (addons-656419) DBG | found existing default KVM network
	I0918 19:38:08.375092   15490 main.go:141] libmachine: (addons-656419) DBG | I0918 19:38:08.374954   15511 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000014fa0}
	I0918 19:38:08.375116   15490 main.go:141] libmachine: (addons-656419) DBG | created network xml: 
	I0918 19:38:08.375129   15490 main.go:141] libmachine: (addons-656419) DBG | <network>
	I0918 19:38:08.375142   15490 main.go:141] libmachine: (addons-656419) DBG |   <name>mk-addons-656419</name>
	I0918 19:38:08.375154   15490 main.go:141] libmachine: (addons-656419) DBG |   <dns enable='no'/>
	I0918 19:38:08.375163   15490 main.go:141] libmachine: (addons-656419) DBG |   
	I0918 19:38:08.375173   15490 main.go:141] libmachine: (addons-656419) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0918 19:38:08.375181   15490 main.go:141] libmachine: (addons-656419) DBG |     <dhcp>
	I0918 19:38:08.375191   15490 main.go:141] libmachine: (addons-656419) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0918 19:38:08.375198   15490 main.go:141] libmachine: (addons-656419) DBG |     </dhcp>
	I0918 19:38:08.375207   15490 main.go:141] libmachine: (addons-656419) DBG |   </ip>
	I0918 19:38:08.375216   15490 main.go:141] libmachine: (addons-656419) DBG |   
	I0918 19:38:08.375224   15490 main.go:141] libmachine: (addons-656419) DBG | </network>
	I0918 19:38:08.375234   15490 main.go:141] libmachine: (addons-656419) DBG | 
	I0918 19:38:08.381191   15490 main.go:141] libmachine: (addons-656419) DBG | trying to create private KVM network mk-addons-656419 192.168.39.0/24...
	I0918 19:38:08.450913   15490 main.go:141] libmachine: (addons-656419) DBG | private KVM network mk-addons-656419 192.168.39.0/24 created
	I0918 19:38:08.450935   15490 main.go:141] libmachine: (addons-656419) DBG | I0918 19:38:08.450869   15511 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19667-7655/.minikube
	I0918 19:38:08.450995   15490 main.go:141] libmachine: (addons-656419) Setting up store path in /home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419 ...
	I0918 19:38:08.451033   15490 main.go:141] libmachine: (addons-656419) Building disk image from file:///home/jenkins/minikube-integration/19667-7655/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0918 19:38:08.451058   15490 main.go:141] libmachine: (addons-656419) Downloading /home/jenkins/minikube-integration/19667-7655/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19667-7655/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0918 19:38:08.771279   15490 main.go:141] libmachine: (addons-656419) DBG | I0918 19:38:08.771159   15511 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa...
	I0918 19:38:08.948984   15490 main.go:141] libmachine: (addons-656419) DBG | I0918 19:38:08.948865   15511 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/addons-656419.rawdisk...
	I0918 19:38:08.949025   15490 main.go:141] libmachine: (addons-656419) DBG | Writing magic tar header
	I0918 19:38:08.949037   15490 main.go:141] libmachine: (addons-656419) DBG | Writing SSH key tar header
	I0918 19:38:08.949099   15490 main.go:141] libmachine: (addons-656419) DBG | I0918 19:38:08.949043   15511 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419 ...
	I0918 19:38:08.949206   15490 main.go:141] libmachine: (addons-656419) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419
	I0918 19:38:08.949226   15490 main.go:141] libmachine: (addons-656419) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7655/.minikube/machines
	I0918 19:38:08.949238   15490 main.go:141] libmachine: (addons-656419) Setting executable bit set on /home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419 (perms=drwx------)
	I0918 19:38:08.949247   15490 main.go:141] libmachine: (addons-656419) Setting executable bit set on /home/jenkins/minikube-integration/19667-7655/.minikube/machines (perms=drwxr-xr-x)
	I0918 19:38:08.949253   15490 main.go:141] libmachine: (addons-656419) Setting executable bit set on /home/jenkins/minikube-integration/19667-7655/.minikube (perms=drwxr-xr-x)
	I0918 19:38:08.949265   15490 main.go:141] libmachine: (addons-656419) Setting executable bit set on /home/jenkins/minikube-integration/19667-7655 (perms=drwxrwxr-x)
	I0918 19:38:08.949281   15490 main.go:141] libmachine: (addons-656419) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0918 19:38:08.949294   15490 main.go:141] libmachine: (addons-656419) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0918 19:38:08.949305   15490 main.go:141] libmachine: (addons-656419) Creating domain...
	I0918 19:38:08.949315   15490 main.go:141] libmachine: (addons-656419) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7655/.minikube
	I0918 19:38:08.949329   15490 main.go:141] libmachine: (addons-656419) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7655
	I0918 19:38:08.949337   15490 main.go:141] libmachine: (addons-656419) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0918 19:38:08.949343   15490 main.go:141] libmachine: (addons-656419) DBG | Checking permissions on dir: /home/jenkins
	I0918 19:38:08.949351   15490 main.go:141] libmachine: (addons-656419) DBG | Checking permissions on dir: /home
	I0918 19:38:08.949362   15490 main.go:141] libmachine: (addons-656419) DBG | Skipping /home - not owner
	I0918 19:38:08.950344   15490 main.go:141] libmachine: (addons-656419) define libvirt domain using xml: 
	I0918 19:38:08.950365   15490 main.go:141] libmachine: (addons-656419) <domain type='kvm'>
	I0918 19:38:08.950384   15490 main.go:141] libmachine: (addons-656419)   <name>addons-656419</name>
	I0918 19:38:08.950392   15490 main.go:141] libmachine: (addons-656419)   <memory unit='MiB'>4000</memory>
	I0918 19:38:08.950408   15490 main.go:141] libmachine: (addons-656419)   <vcpu>2</vcpu>
	I0918 19:38:08.950415   15490 main.go:141] libmachine: (addons-656419)   <features>
	I0918 19:38:08.950420   15490 main.go:141] libmachine: (addons-656419)     <acpi/>
	I0918 19:38:08.950426   15490 main.go:141] libmachine: (addons-656419)     <apic/>
	I0918 19:38:08.950431   15490 main.go:141] libmachine: (addons-656419)     <pae/>
	I0918 19:38:08.950437   15490 main.go:141] libmachine: (addons-656419)     
	I0918 19:38:08.950442   15490 main.go:141] libmachine: (addons-656419)   </features>
	I0918 19:38:08.950448   15490 main.go:141] libmachine: (addons-656419)   <cpu mode='host-passthrough'>
	I0918 19:38:08.950453   15490 main.go:141] libmachine: (addons-656419)   
	I0918 19:38:08.950462   15490 main.go:141] libmachine: (addons-656419)   </cpu>
	I0918 19:38:08.950484   15490 main.go:141] libmachine: (addons-656419)   <os>
	I0918 19:38:08.950505   15490 main.go:141] libmachine: (addons-656419)     <type>hvm</type>
	I0918 19:38:08.950517   15490 main.go:141] libmachine: (addons-656419)     <boot dev='cdrom'/>
	I0918 19:38:08.950586   15490 main.go:141] libmachine: (addons-656419)     <boot dev='hd'/>
	I0918 19:38:08.950664   15490 main.go:141] libmachine: (addons-656419)     <bootmenu enable='no'/>
	I0918 19:38:08.950692   15490 main.go:141] libmachine: (addons-656419)   </os>
	I0918 19:38:08.950699   15490 main.go:141] libmachine: (addons-656419)   <devices>
	I0918 19:38:08.950707   15490 main.go:141] libmachine: (addons-656419)     <disk type='file' device='cdrom'>
	I0918 19:38:08.950717   15490 main.go:141] libmachine: (addons-656419)       <source file='/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/boot2docker.iso'/>
	I0918 19:38:08.950727   15490 main.go:141] libmachine: (addons-656419)       <target dev='hdc' bus='scsi'/>
	I0918 19:38:08.950732   15490 main.go:141] libmachine: (addons-656419)       <readonly/>
	I0918 19:38:08.950739   15490 main.go:141] libmachine: (addons-656419)     </disk>
	I0918 19:38:08.950745   15490 main.go:141] libmachine: (addons-656419)     <disk type='file' device='disk'>
	I0918 19:38:08.950753   15490 main.go:141] libmachine: (addons-656419)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0918 19:38:08.950762   15490 main.go:141] libmachine: (addons-656419)       <source file='/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/addons-656419.rawdisk'/>
	I0918 19:38:08.950772   15490 main.go:141] libmachine: (addons-656419)       <target dev='hda' bus='virtio'/>
	I0918 19:38:08.950789   15490 main.go:141] libmachine: (addons-656419)     </disk>
	I0918 19:38:08.950805   15490 main.go:141] libmachine: (addons-656419)     <interface type='network'>
	I0918 19:38:08.950826   15490 main.go:141] libmachine: (addons-656419)       <source network='mk-addons-656419'/>
	I0918 19:38:08.950843   15490 main.go:141] libmachine: (addons-656419)       <model type='virtio'/>
	I0918 19:38:08.950855   15490 main.go:141] libmachine: (addons-656419)     </interface>
	I0918 19:38:08.950865   15490 main.go:141] libmachine: (addons-656419)     <interface type='network'>
	I0918 19:38:08.950877   15490 main.go:141] libmachine: (addons-656419)       <source network='default'/>
	I0918 19:38:08.950886   15490 main.go:141] libmachine: (addons-656419)       <model type='virtio'/>
	I0918 19:38:08.950896   15490 main.go:141] libmachine: (addons-656419)     </interface>
	I0918 19:38:08.950903   15490 main.go:141] libmachine: (addons-656419)     <serial type='pty'>
	I0918 19:38:08.950914   15490 main.go:141] libmachine: (addons-656419)       <target port='0'/>
	I0918 19:38:08.950927   15490 main.go:141] libmachine: (addons-656419)     </serial>
	I0918 19:38:08.950939   15490 main.go:141] libmachine: (addons-656419)     <console type='pty'>
	I0918 19:38:08.950949   15490 main.go:141] libmachine: (addons-656419)       <target type='serial' port='0'/>
	I0918 19:38:08.950960   15490 main.go:141] libmachine: (addons-656419)     </console>
	I0918 19:38:08.950969   15490 main.go:141] libmachine: (addons-656419)     <rng model='virtio'>
	I0918 19:38:08.950982   15490 main.go:141] libmachine: (addons-656419)       <backend model='random'>/dev/random</backend>
	I0918 19:38:08.950991   15490 main.go:141] libmachine: (addons-656419)     </rng>
	I0918 19:38:08.951002   15490 main.go:141] libmachine: (addons-656419)     
	I0918 19:38:08.951012   15490 main.go:141] libmachine: (addons-656419)     
	I0918 19:38:08.951021   15490 main.go:141] libmachine: (addons-656419)   </devices>
	I0918 19:38:08.951041   15490 main.go:141] libmachine: (addons-656419) </domain>
	I0918 19:38:08.951050   15490 main.go:141] libmachine: (addons-656419) 
	I0918 19:38:08.956603   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:ea:2f:45 in network default
	I0918 19:38:08.957156   15490 main.go:141] libmachine: (addons-656419) Ensuring networks are active...
	I0918 19:38:08.957175   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:08.957849   15490 main.go:141] libmachine: (addons-656419) Ensuring network default is active
	I0918 19:38:08.958157   15490 main.go:141] libmachine: (addons-656419) Ensuring network mk-addons-656419 is active
	I0918 19:38:08.958664   15490 main.go:141] libmachine: (addons-656419) Getting domain xml...
	I0918 19:38:08.959360   15490 main.go:141] libmachine: (addons-656419) Creating domain...
	I0918 19:38:10.452326   15490 main.go:141] libmachine: (addons-656419) Waiting to get IP...
	I0918 19:38:10.453168   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:10.453678   15490 main.go:141] libmachine: (addons-656419) DBG | unable to find current IP address of domain addons-656419 in network mk-addons-656419
	I0918 19:38:10.453695   15490 main.go:141] libmachine: (addons-656419) DBG | I0918 19:38:10.453660   15511 retry.go:31] will retry after 271.242708ms: waiting for machine to come up
	I0918 19:38:10.726114   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:10.726527   15490 main.go:141] libmachine: (addons-656419) DBG | unable to find current IP address of domain addons-656419 in network mk-addons-656419
	I0918 19:38:10.726554   15490 main.go:141] libmachine: (addons-656419) DBG | I0918 19:38:10.726459   15511 retry.go:31] will retry after 273.420539ms: waiting for machine to come up
	I0918 19:38:11.001884   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:11.002306   15490 main.go:141] libmachine: (addons-656419) DBG | unable to find current IP address of domain addons-656419 in network mk-addons-656419
	I0918 19:38:11.002333   15490 main.go:141] libmachine: (addons-656419) DBG | I0918 19:38:11.002249   15511 retry.go:31] will retry after 415.881431ms: waiting for machine to come up
	I0918 19:38:11.419872   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:11.420343   15490 main.go:141] libmachine: (addons-656419) DBG | unable to find current IP address of domain addons-656419 in network mk-addons-656419
	I0918 19:38:11.420363   15490 main.go:141] libmachine: (addons-656419) DBG | I0918 19:38:11.420309   15511 retry.go:31] will retry after 423.561235ms: waiting for machine to come up
	I0918 19:38:11.845975   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:11.846410   15490 main.go:141] libmachine: (addons-656419) DBG | unable to find current IP address of domain addons-656419 in network mk-addons-656419
	I0918 19:38:11.846432   15490 main.go:141] libmachine: (addons-656419) DBG | I0918 19:38:11.846357   15511 retry.go:31] will retry after 482.232776ms: waiting for machine to come up
	I0918 19:38:12.330140   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:12.330573   15490 main.go:141] libmachine: (addons-656419) DBG | unable to find current IP address of domain addons-656419 in network mk-addons-656419
	I0918 19:38:12.330598   15490 main.go:141] libmachine: (addons-656419) DBG | I0918 19:38:12.330534   15511 retry.go:31] will retry after 742.231948ms: waiting for machine to come up
	I0918 19:38:13.074269   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:13.074734   15490 main.go:141] libmachine: (addons-656419) DBG | unable to find current IP address of domain addons-656419 in network mk-addons-656419
	I0918 19:38:13.074759   15490 main.go:141] libmachine: (addons-656419) DBG | I0918 19:38:13.074672   15511 retry.go:31] will retry after 947.857967ms: waiting for machine to come up
	I0918 19:38:14.024127   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:14.024817   15490 main.go:141] libmachine: (addons-656419) DBG | unable to find current IP address of domain addons-656419 in network mk-addons-656419
	I0918 19:38:14.024849   15490 main.go:141] libmachine: (addons-656419) DBG | I0918 19:38:14.024753   15511 retry.go:31] will retry after 1.306274285s: waiting for machine to come up
	I0918 19:38:15.333408   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:15.333878   15490 main.go:141] libmachine: (addons-656419) DBG | unable to find current IP address of domain addons-656419 in network mk-addons-656419
	I0918 19:38:15.333891   15490 main.go:141] libmachine: (addons-656419) DBG | I0918 19:38:15.333852   15511 retry.go:31] will retry after 1.716564347s: waiting for machine to come up
	I0918 19:38:17.051636   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:17.052075   15490 main.go:141] libmachine: (addons-656419) DBG | unable to find current IP address of domain addons-656419 in network mk-addons-656419
	I0918 19:38:17.052103   15490 main.go:141] libmachine: (addons-656419) DBG | I0918 19:38:17.052037   15511 retry.go:31] will retry after 1.979067546s: waiting for machine to come up
	I0918 19:38:19.033287   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:19.033844   15490 main.go:141] libmachine: (addons-656419) DBG | unable to find current IP address of domain addons-656419 in network mk-addons-656419
	I0918 19:38:19.033872   15490 main.go:141] libmachine: (addons-656419) DBG | I0918 19:38:19.033790   15511 retry.go:31] will retry after 2.413616058s: waiting for machine to come up
	I0918 19:38:21.450311   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:21.450769   15490 main.go:141] libmachine: (addons-656419) DBG | unable to find current IP address of domain addons-656419 in network mk-addons-656419
	I0918 19:38:21.450795   15490 main.go:141] libmachine: (addons-656419) DBG | I0918 19:38:21.450715   15511 retry.go:31] will retry after 2.38184027s: waiting for machine to come up
	I0918 19:38:23.833822   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:23.834343   15490 main.go:141] libmachine: (addons-656419) DBG | unable to find current IP address of domain addons-656419 in network mk-addons-656419
	I0918 19:38:23.834364   15490 main.go:141] libmachine: (addons-656419) DBG | I0918 19:38:23.834296   15511 retry.go:31] will retry after 3.959616921s: waiting for machine to come up
	I0918 19:38:27.798644   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:27.799278   15490 main.go:141] libmachine: (addons-656419) DBG | unable to find current IP address of domain addons-656419 in network mk-addons-656419
	I0918 19:38:27.799297   15490 main.go:141] libmachine: (addons-656419) DBG | I0918 19:38:27.799233   15511 retry.go:31] will retry after 4.90137978s: waiting for machine to come up
	I0918 19:38:32.706267   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:32.706877   15490 main.go:141] libmachine: (addons-656419) Found IP for machine: 192.168.39.154
	I0918 19:38:32.706902   15490 main.go:141] libmachine: (addons-656419) Reserving static IP address...
	I0918 19:38:32.706921   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has current primary IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:32.707340   15490 main.go:141] libmachine: (addons-656419) DBG | unable to find host DHCP lease matching {name: "addons-656419", mac: "52:54:00:61:13:5c", ip: "192.168.39.154"} in network mk-addons-656419
	I0918 19:38:32.785630   15490 main.go:141] libmachine: (addons-656419) DBG | Getting to WaitForSSH function...
	I0918 19:38:32.785657   15490 main.go:141] libmachine: (addons-656419) Reserved static IP address: 192.168.39.154
	I0918 19:38:32.785670   15490 main.go:141] libmachine: (addons-656419) Waiting for SSH to be available...
	I0918 19:38:32.787968   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:32.788399   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:minikube Clientid:01:52:54:00:61:13:5c}
	I0918 19:38:32.788436   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:32.788669   15490 main.go:141] libmachine: (addons-656419) DBG | Using SSH client type: external
	I0918 19:38:32.788686   15490 main.go:141] libmachine: (addons-656419) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa (-rw-------)
	I0918 19:38:32.788706   15490 main.go:141] libmachine: (addons-656419) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.154 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 19:38:32.788713   15490 main.go:141] libmachine: (addons-656419) DBG | About to run SSH command:
	I0918 19:38:32.788721   15490 main.go:141] libmachine: (addons-656419) DBG | exit 0
	I0918 19:38:32.925043   15490 main.go:141] libmachine: (addons-656419) DBG | SSH cmd err, output: <nil>: 
	I0918 19:38:32.925318   15490 main.go:141] libmachine: (addons-656419) KVM machine creation complete!
	I0918 19:38:32.925677   15490 main.go:141] libmachine: (addons-656419) Calling .GetConfigRaw
	I0918 19:38:32.926342   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:38:32.926517   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:38:32.926731   15490 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0918 19:38:32.926745   15490 main.go:141] libmachine: (addons-656419) Calling .GetState
	I0918 19:38:32.928194   15490 main.go:141] libmachine: Detecting operating system of created instance...
	I0918 19:38:32.928208   15490 main.go:141] libmachine: Waiting for SSH to be available...
	I0918 19:38:32.928213   15490 main.go:141] libmachine: Getting to WaitForSSH function...
	I0918 19:38:32.928219   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:38:32.930672   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:32.931073   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:38:32.931121   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:32.931273   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:38:32.931443   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:38:32.931604   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:38:32.931714   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:38:32.931870   15490 main.go:141] libmachine: Using SSH client type: native
	I0918 19:38:32.932052   15490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0918 19:38:32.932062   15490 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0918 19:38:33.044615   15490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 19:38:33.044641   15490 main.go:141] libmachine: Detecting the provisioner...
	I0918 19:38:33.044651   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:38:33.047871   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:33.048201   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:38:33.048231   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:33.048393   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:38:33.048588   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:38:33.048727   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:38:33.048855   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:38:33.049002   15490 main.go:141] libmachine: Using SSH client type: native
	I0918 19:38:33.049177   15490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0918 19:38:33.049191   15490 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0918 19:38:33.161835   15490 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0918 19:38:33.161908   15490 main.go:141] libmachine: found compatible host: buildroot
	I0918 19:38:33.161921   15490 main.go:141] libmachine: Provisioning with buildroot...
	I0918 19:38:33.161933   15490 main.go:141] libmachine: (addons-656419) Calling .GetMachineName
	I0918 19:38:33.162221   15490 buildroot.go:166] provisioning hostname "addons-656419"
	I0918 19:38:33.162250   15490 main.go:141] libmachine: (addons-656419) Calling .GetMachineName
	I0918 19:38:33.162445   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:38:33.165459   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:33.165780   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:38:33.165809   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:33.165964   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:38:33.166161   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:38:33.166356   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:38:33.166565   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:38:33.166727   15490 main.go:141] libmachine: Using SSH client type: native
	I0918 19:38:33.166946   15490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0918 19:38:33.166966   15490 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-656419 && echo "addons-656419" | sudo tee /etc/hostname
	I0918 19:38:33.297109   15490 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-656419
	
	I0918 19:38:33.297136   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:38:33.300052   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:33.300447   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:38:33.300481   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:33.300732   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:38:33.301001   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:38:33.301208   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:38:33.301348   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:38:33.301519   15490 main.go:141] libmachine: Using SSH client type: native
	I0918 19:38:33.301726   15490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0918 19:38:33.301749   15490 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-656419' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-656419/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-656419' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 19:38:33.423691   15490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 19:38:33.423723   15490 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7655/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7655/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7655/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7655/.minikube}
	I0918 19:38:33.423771   15490 buildroot.go:174] setting up certificates
	I0918 19:38:33.423783   15490 provision.go:84] configureAuth start
	I0918 19:38:33.423795   15490 main.go:141] libmachine: (addons-656419) Calling .GetMachineName
	I0918 19:38:33.424054   15490 main.go:141] libmachine: (addons-656419) Calling .GetIP
	I0918 19:38:33.426907   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:33.427334   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:38:33.427383   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:33.427518   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:38:33.430145   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:33.430461   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:38:33.430485   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:33.430664   15490 provision.go:143] copyHostCerts
	I0918 19:38:33.430758   15490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7655/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7655/.minikube/cert.pem (1123 bytes)
	I0918 19:38:33.430884   15490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7655/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7655/.minikube/key.pem (1679 bytes)
	I0918 19:38:33.430946   15490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7655/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7655/.minikube/ca.pem (1078 bytes)
	I0918 19:38:33.430998   15490 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7655/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7655/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7655/.minikube/certs/ca-key.pem org=jenkins.addons-656419 san=[127.0.0.1 192.168.39.154 addons-656419 localhost minikube]
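The server cert above is generated with the listed SANs (127.0.0.1, 192.168.39.154, addons-656419, localhost, minikube). A rough openssl equivalent of that step, under the assumption of a generic CA-signed setup rather than minikube's exact Go code (requires bash for the process substitution):

```shell
# Hypothetical sketch of provision.go's server-cert step with openssl.
# Filenames and the 1-day validity are illustrative assumptions.
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
  -subj "/O=jenkins.addons-656419" -days 1
openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
  -subj "/CN=addons-656419"
# Sign the CSR with the CA and attach the SANs from the log line.
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out server.pem -days 1 \
  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.154,DNS:addons-656419,DNS:localhost,DNS:minikube")
openssl verify -CAfile ca.pem server.pem
```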
	I0918 19:38:33.536626   15490 provision.go:177] copyRemoteCerts
	I0918 19:38:33.536683   15490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 19:38:33.536705   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:38:33.539724   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:33.540096   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:38:33.540115   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:33.540389   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:38:33.540559   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:38:33.540713   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:38:33.540890   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	I0918 19:38:33.627322   15490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7655/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0918 19:38:33.655682   15490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7655/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0918 19:38:33.684055   15490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7655/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 19:38:33.709864   15490 provision.go:87] duration metric: took 286.064154ms to configureAuth
	I0918 19:38:33.709895   15490 buildroot.go:189] setting minikube options for container-runtime
	I0918 19:38:33.710072   15490 config.go:182] Loaded profile config "addons-656419": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 19:38:33.710096   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:38:33.710365   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:38:33.713426   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:33.713790   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:38:33.713815   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:33.713975   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:38:33.714177   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:38:33.714374   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:38:33.714532   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:38:33.714676   15490 main.go:141] libmachine: Using SSH client type: native
	I0918 19:38:33.714874   15490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0918 19:38:33.714890   15490 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0918 19:38:33.826776   15490 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0918 19:38:33.826809   15490 buildroot.go:70] root file system type: tmpfs
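The root-filesystem probe that buildroot.go runs over SSH is just the `df` pipeline shown above; locally it reduces to (GNU coreutils `--output` assumed):

```shell
# Detect the root filesystem type; "tmpfs" indicates the RAM-backed
# Buildroot guest image seen in the log.
fstype=$(df --output=fstype / | tail -n 1)
echo "root fs: $fstype"
```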
	I0918 19:38:33.826958   15490 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0918 19:38:33.826982   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:38:33.830104   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:33.830574   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:38:33.830607   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:33.830787   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:38:33.830998   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:38:33.831168   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:38:33.831367   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:38:33.831520   15490 main.go:141] libmachine: Using SSH client type: native
	I0918 19:38:33.831772   15490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0918 19:38:33.831881   15490 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0918 19:38:33.962393   15490 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0918 19:38:33.962419   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:38:33.965483   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:33.965876   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:38:33.965902   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:33.966076   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:38:33.966286   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:38:33.966489   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:38:33.966686   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:38:33.966927   15490 main.go:141] libmachine: Using SSH client type: native
	I0918 19:38:33.967110   15490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0918 19:38:33.967134   15490 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0918 19:38:35.783047   15490 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0918 19:38:35.783081   15490 main.go:141] libmachine: Checking connection to Docker...
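The `diff -u ... || { mv ...; systemctl ...; }` command above only swaps in the new unit file and restarts docker when it differs from what is on disk (here `diff` failed with "No such file or directory", so the new file was installed unconditionally). The same idempotent pattern on scratch files, with the systemctl side effects left out:

```shell
# Idempotent config swap: replace the current file only when the new
# one differs. Scratch files stand in for the real systemd units.
cur=$(mktemp); new=$(mktemp)
printf 'ExecStart=old\n' > "$cur"
printf 'ExecStart=new\n' > "$new"
diff -u "$cur" "$new" >/dev/null || mv "$new" "$cur"  # swap only on difference
cat "$cur"
```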
	I0918 19:38:35.783098   15490 main.go:141] libmachine: (addons-656419) Calling .GetURL
	I0918 19:38:35.784599   15490 main.go:141] libmachine: (addons-656419) DBG | Using libvirt version 6000000
	I0918 19:38:35.787071   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:35.787442   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:38:35.787470   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:35.787617   15490 main.go:141] libmachine: Docker is up and running!
	I0918 19:38:35.787635   15490 main.go:141] libmachine: Reticulating splines...
	I0918 19:38:35.787645   15490 client.go:171] duration metric: took 27.865089319s to LocalClient.Create
	I0918 19:38:35.787670   15490 start.go:167] duration metric: took 27.865155898s to libmachine.API.Create "addons-656419"
	I0918 19:38:35.787680   15490 start.go:293] postStartSetup for "addons-656419" (driver="kvm2")
	I0918 19:38:35.787690   15490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 19:38:35.787705   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:38:35.787931   15490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 19:38:35.787951   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:38:35.790379   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:35.790819   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:38:35.790849   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:35.791012   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:38:35.791210   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:38:35.791407   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:38:35.791530   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	I0918 19:38:35.880747   15490 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 19:38:35.885521   15490 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 19:38:35.885551   15490 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7655/.minikube/addons for local assets ...
	I0918 19:38:35.885644   15490 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7655/.minikube/files for local assets ...
	I0918 19:38:35.885671   15490 start.go:296] duration metric: took 97.986286ms for postStartSetup
	I0918 19:38:35.885707   15490 main.go:141] libmachine: (addons-656419) Calling .GetConfigRaw
	I0918 19:38:35.886353   15490 main.go:141] libmachine: (addons-656419) Calling .GetIP
	I0918 19:38:35.889314   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:35.889828   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:38:35.889857   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:35.890199   15490 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/config.json ...
	I0918 19:38:35.890419   15490 start.go:128] duration metric: took 27.98716301s to createHost
	I0918 19:38:35.890444   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:38:35.893108   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:35.893415   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:38:35.893446   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:35.893700   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:38:35.893946   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:38:35.894221   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:38:35.894495   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:38:35.894746   15490 main.go:141] libmachine: Using SSH client type: native
	I0918 19:38:35.894996   15490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0918 19:38:35.895008   15490 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 19:38:36.010111   15490 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726688315.983274987
	
	I0918 19:38:36.010132   15490 fix.go:216] guest clock: 1726688315.983274987
	I0918 19:38:36.010141   15490 fix.go:229] Guest: 2024-09-18 19:38:35.983274987 +0000 UTC Remote: 2024-09-18 19:38:35.890433028 +0000 UTC m=+28.104828907 (delta=92.841959ms)
	I0918 19:38:36.010165   15490 fix.go:200] guest clock delta is within tolerance: 92.841959ms
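The clock check above runs `date +%s.%N` on the guest and compares it with the host's clock, accepting a small delta (92.8ms here). A local approximation, with GNU `date` assumed and a 1-second tolerance chosen arbitrarily for the sketch:

```shell
# Compare two clock readings and accept a bounded delta, mirroring
# fix.go's guest-clock tolerance check. The 1.0s bound is an assumption.
guest=$(date +%s.%N)
host=$(date +%s.%N)
delta=$(awk -v g="$guest" -v h="$host" 'BEGIN { d = h - g; if (d < 0) d = -d; print d }')
awk -v d="$delta" 'BEGIN { exit !(d < 1.0) }' && echo "clock delta within tolerance: ${delta}s"
```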
	I0918 19:38:36.010170   15490 start.go:83] releasing machines lock for "addons-656419", held for 28.107016928s
	I0918 19:38:36.010192   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:38:36.010458   15490 main.go:141] libmachine: (addons-656419) Calling .GetIP
	I0918 19:38:36.014773   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:36.015370   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:38:36.015408   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:36.015781   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:38:36.016534   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:38:36.016727   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:38:36.016827   15490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 19:38:36.016878   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:38:36.016958   15490 ssh_runner.go:195] Run: cat /version.json
	I0918 19:38:36.016986   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:38:36.019740   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:36.019792   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:36.020160   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:38:36.020189   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:36.020222   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:38:36.020238   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:36.020509   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:38:36.020537   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:38:36.020673   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:38:36.020748   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:38:36.020820   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:38:36.020999   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	I0918 19:38:36.021023   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:38:36.021158   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	I0918 19:38:36.127914   15490 ssh_runner.go:195] Run: systemctl --version
	I0918 19:38:36.134313   15490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 19:38:36.140391   15490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 19:38:36.140457   15490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 19:38:36.158003   15490 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 19:38:36.158032   15490 start.go:495] detecting cgroup driver to use...
	I0918 19:38:36.158161   15490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 19:38:36.177751   15490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0918 19:38:36.189431   15490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0918 19:38:36.200620   15490 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0918 19:38:36.200701   15490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0918 19:38:36.211917   15490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 19:38:36.223170   15490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0918 19:38:36.234363   15490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 19:38:36.245408   15490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 19:38:36.257203   15490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0918 19:38:36.269342   15490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0918 19:38:36.280762   15490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0918 19:38:36.292394   15490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 19:38:36.303035   15490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 19:38:36.313344   15490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:38:36.434826   15490 ssh_runner.go:195] Run: sudo systemctl restart containerd
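The "configuring containerd to use cgroupfs" step above is a plain `sed` over `config.toml`; the same edit on a scratch copy (GNU `sed` assumed for `-i`):

```shell
# Flip SystemdCgroup to false while preserving indentation, as the
# logged sed command does against /etc/containerd/config.toml.
cfg=$(mktemp)
printf '    SystemdCgroup = true\n' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```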
	I0918 19:38:36.459671   15490 start.go:495] detecting cgroup driver to use...
	I0918 19:38:36.459776   15490 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0918 19:38:36.481789   15490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 19:38:36.497514   15490 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 19:38:36.522802   15490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 19:38:36.536851   15490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 19:38:36.550199   15490 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0918 19:38:36.583529   15490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 19:38:36.597636   15490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 19:38:36.617122   15490 ssh_runner.go:195] Run: which cri-dockerd
	I0918 19:38:36.621355   15490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0918 19:38:36.631403   15490 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0918 19:38:36.649145   15490 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0918 19:38:36.770123   15490 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0918 19:38:36.897703   15490 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0918 19:38:36.897836   15490 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0918 19:38:36.915280   15490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:38:37.028075   15490 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 19:38:40.511484   15490 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.483362635s)
	I0918 19:38:40.511624   15490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0918 19:38:40.554869   15490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0918 19:38:40.577929   15490 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0918 19:38:40.701536   15490 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0918 19:38:40.829828   15490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:38:40.944293   15490 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0918 19:38:40.962738   15490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0918 19:38:40.976738   15490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:38:41.090585   15490 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0918 19:38:41.174787   15490 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0918 19:38:41.174905   15490 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0918 19:38:41.180752   15490 start.go:563] Will wait 60s for crictl version
	I0918 19:38:41.180825   15490 ssh_runner.go:195] Run: which crictl
	I0918 19:38:41.186232   15490 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 19:38:41.224360   15490 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0918 19:38:41.224444   15490 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 19:38:41.250296   15490 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 19:38:41.276078   15490 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0918 19:38:41.276130   15490 main.go:141] libmachine: (addons-656419) Calling .GetIP
	I0918 19:38:41.279217   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:41.279583   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:38:41.279617   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:38:41.279869   15490 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 19:38:41.284159   15490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
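The hosts-file update above filters out any stale `host.minikube.internal` line and appends the fresh mapping, so repeated runs leave exactly one entry. The same pattern on a scratch file (bash's `$'\t'` expansion assumed):

```shell
# Replace-or-append a hosts entry: strip the old mapping, append the new
# one, then move the result into place atomically.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.2\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.39.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```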
	I0918 19:38:41.297143   15490 kubeadm.go:883] updating cluster {Name:addons-656419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-656419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 19:38:41.297248   15490 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 19:38:41.297294   15490 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 19:38:41.313930   15490 docker.go:685] Got preloaded images: 
	I0918 19:38:41.313955   15490 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0918 19:38:41.314000   15490 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 19:38:41.324164   15490 ssh_runner.go:195] Run: which lz4
	I0918 19:38:41.328271   15490 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 19:38:41.332416   15490 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 19:38:41.332472   15490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I0918 19:38:42.678063   15490 docker.go:649] duration metric: took 1.349827656s to copy over tarball
	I0918 19:38:42.678142   15490 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 19:38:45.559769   15490 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.881588713s)
	I0918 19:38:45.559808   15490 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 19:38:45.594076   15490 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 19:38:45.605690   15490 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0918 19:38:45.623791   15490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:38:45.739853   15490 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 19:38:48.242019   15490 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.502120497s)
	I0918 19:38:48.242129   15490 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 19:38:48.259977   15490 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0918 19:38:48.260011   15490 cache_images.go:84] Images are preloaded, skipping loading
	I0918 19:38:48.260024   15490 kubeadm.go:934] updating node { 192.168.39.154 8443 v1.31.1 docker true true} ...
	I0918 19:38:48.260156   15490 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-656419 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-656419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 19:38:48.260216   15490 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0918 19:38:48.309110   15490 cni.go:84] Creating CNI manager for ""
	I0918 19:38:48.309143   15490 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 19:38:48.309153   15490 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 19:38:48.309171   15490 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.154 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-656419 NodeName:addons-656419 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 19:38:48.309301   15490 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-656419"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.154
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.154"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 19:38:48.309359   15490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 19:38:48.320492   15490 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 19:38:48.320555   15490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 19:38:48.330964   15490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0918 19:38:48.348711   15490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 19:38:48.365695   15490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0918 19:38:48.382978   15490 ssh_runner.go:195] Run: grep 192.168.39.154	control-plane.minikube.internal$ /etc/hosts
	I0918 19:38:48.386718   15490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.154	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 19:38:48.399331   15490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:38:48.520105   15490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 19:38:48.542929   15490 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419 for IP: 192.168.39.154
	I0918 19:38:48.542954   15490 certs.go:194] generating shared ca certs ...
	I0918 19:38:48.542975   15490 certs.go:226] acquiring lock for ca certs: {Name:mk310d2c853fd6545e75d02a7b137505f120f139 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:48.543116   15490 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7655/.minikube/ca.key
	I0918 19:38:48.709552   15490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7655/.minikube/ca.crt ...
	I0918 19:38:48.709582   15490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7655/.minikube/ca.crt: {Name:mk3a98f7151fe0bee74a4dee39565d6522c1c402 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:48.709783   15490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7655/.minikube/ca.key ...
	I0918 19:38:48.709797   15490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7655/.minikube/ca.key: {Name:mk281b127b4d070cd3328fcb44b8ddc248d754ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:48.709907   15490 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7655/.minikube/proxy-client-ca.key
	I0918 19:38:48.841256   15490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7655/.minikube/proxy-client-ca.crt ...
	I0918 19:38:48.841287   15490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7655/.minikube/proxy-client-ca.crt: {Name:mk3a7c1db61e0d7e41b0e2b4a84ec39eb88c06b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:48.841499   15490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7655/.minikube/proxy-client-ca.key ...
	I0918 19:38:48.841522   15490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7655/.minikube/proxy-client-ca.key: {Name:mkc7032cea839b5d2a58f70f72bca6c4cbcda91b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:48.841644   15490 certs.go:256] generating profile certs ...
	I0918 19:38:48.841710   15490 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.key
	I0918 19:38:48.841728   15490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt with IP's: []
	I0918 19:38:48.956244   15490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt ...
	I0918 19:38:48.956274   15490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: {Name:mkf7e93f74989078c181ebc7a3cf52cebb7808a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:48.956476   15490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.key ...
	I0918 19:38:48.956492   15490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.key: {Name:mka48037a4877e459dd23ed1bb95acc40129f1e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:48.956597   15490 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/apiserver.key.2c8c24b9
	I0918 19:38:48.956627   15490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/apiserver.crt.2c8c24b9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.154]
	I0918 19:38:49.164509   15490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/apiserver.crt.2c8c24b9 ...
	I0918 19:38:49.164538   15490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/apiserver.crt.2c8c24b9: {Name:mkb53fde4d84a0d5d7a9d18a664591c482f429ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:49.164726   15490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/apiserver.key.2c8c24b9 ...
	I0918 19:38:49.164746   15490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/apiserver.key.2c8c24b9: {Name:mk6542ccccc710f0e063947b2c518fb42e385095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:49.164859   15490 certs.go:381] copying /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/apiserver.crt.2c8c24b9 -> /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/apiserver.crt
	I0918 19:38:49.164987   15490 certs.go:385] copying /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/apiserver.key.2c8c24b9 -> /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/apiserver.key
	I0918 19:38:49.165073   15490 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/proxy-client.key
	I0918 19:38:49.165097   15490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/proxy-client.crt with IP's: []
	I0918 19:38:49.223409   15490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/proxy-client.crt ...
	I0918 19:38:49.223437   15490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/proxy-client.crt: {Name:mka6e3430120f9e49adf6606faa746b190507e1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:49.223685   15490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/proxy-client.key ...
	I0918 19:38:49.223700   15490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/proxy-client.key: {Name:mkd2388a11c053fc29473d29bb03f64ca7deec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:49.223929   15490 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7655/.minikube/certs/ca-key.pem (1679 bytes)
	I0918 19:38:49.223971   15490 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7655/.minikube/certs/ca.pem (1078 bytes)
	I0918 19:38:49.224013   15490 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7655/.minikube/certs/cert.pem (1123 bytes)
	I0918 19:38:49.224043   15490 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7655/.minikube/certs/key.pem (1679 bytes)
	I0918 19:38:49.224694   15490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7655/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 19:38:49.258238   15490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7655/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0918 19:38:49.287037   15490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7655/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 19:38:49.311882   15490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7655/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I0918 19:38:49.336958   15490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0918 19:38:49.361204   15490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 19:38:49.385396   15490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 19:38:49.410662   15490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 19:38:49.436162   15490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7655/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 19:38:49.461001   15490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 19:38:49.478248   15490 ssh_runner.go:195] Run: openssl version
	I0918 19:38:49.484362   15490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 19:38:49.495729   15490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:38:49.500408   15490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:38 /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:38:49.500474   15490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:38:49.506459   15490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 19:38:49.518050   15490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 19:38:49.522470   15490 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 19:38:49.522516   15490 kubeadm.go:392] StartCluster: {Name:addons-656419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:addons-656419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:38:49.522618   15490 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0918 19:38:49.540650   15490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 19:38:49.551365   15490 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 19:38:49.561524   15490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 19:38:49.571542   15490 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 19:38:49.571563   15490 kubeadm.go:157] found existing configuration files:
	
	I0918 19:38:49.571616   15490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 19:38:49.581208   15490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 19:38:49.581266   15490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 19:38:49.591350   15490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 19:38:49.600784   15490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 19:38:49.600861   15490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 19:38:49.610752   15490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 19:38:49.620216   15490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 19:38:49.620311   15490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 19:38:49.630287   15490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 19:38:49.639752   15490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 19:38:49.639828   15490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 19:38:49.649779   15490 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 19:38:49.690431   15490 kubeadm.go:310] W0918 19:38:49.664205    1507 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 19:38:49.691162   15490 kubeadm.go:310] W0918 19:38:49.665181    1507 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 19:38:49.789426   15490 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 19:39:00.866096   15490 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 19:39:00.866168   15490 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 19:39:00.866251   15490 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 19:39:00.866363   15490 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 19:39:00.866462   15490 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 19:39:00.866517   15490 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 19:39:00.868292   15490 out.go:235]   - Generating certificates and keys ...
	I0918 19:39:00.868382   15490 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 19:39:00.868444   15490 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 19:39:00.868511   15490 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 19:39:00.868577   15490 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0918 19:39:00.868647   15490 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0918 19:39:00.868701   15490 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0918 19:39:00.868749   15490 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0918 19:39:00.868849   15490 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-656419 localhost] and IPs [192.168.39.154 127.0.0.1 ::1]
	I0918 19:39:00.868897   15490 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0918 19:39:00.869022   15490 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-656419 localhost] and IPs [192.168.39.154 127.0.0.1 ::1]
	I0918 19:39:00.869113   15490 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 19:39:00.869177   15490 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 19:39:00.869231   15490 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0918 19:39:00.869279   15490 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 19:39:00.869367   15490 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 19:39:00.869420   15490 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 19:39:00.869475   15490 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 19:39:00.869553   15490 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 19:39:00.869604   15490 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 19:39:00.869674   15490 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 19:39:00.869732   15490 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 19:39:00.871505   15490 out.go:235]   - Booting up control plane ...
	I0918 19:39:00.871610   15490 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 19:39:00.871731   15490 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 19:39:00.871825   15490 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 19:39:00.871920   15490 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 19:39:00.872102   15490 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 19:39:00.872172   15490 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 19:39:00.872370   15490 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 19:39:00.872458   15490 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 19:39:00.872509   15490 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.643891ms
	I0918 19:39:00.872586   15490 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 19:39:00.872651   15490 kubeadm.go:310] [api-check] The API server is healthy after 5.501468596s
	I0918 19:39:00.872797   15490 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 19:39:00.873003   15490 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 19:39:00.873075   15490 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 19:39:00.873261   15490 kubeadm.go:310] [mark-control-plane] Marking the node addons-656419 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 19:39:00.873320   15490 kubeadm.go:310] [bootstrap-token] Using token: wogdo2.j8lh2fdewe9ig9qb
	I0918 19:39:00.875120   15490 out.go:235]   - Configuring RBAC rules ...
	I0918 19:39:00.875238   15490 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 19:39:00.875322   15490 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 19:39:00.875461   15490 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 19:39:00.875585   15490 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 19:39:00.875689   15490 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 19:39:00.875761   15490 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 19:39:00.875924   15490 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 19:39:00.875966   15490 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 19:39:00.876032   15490 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 19:39:00.876044   15490 kubeadm.go:310] 
	I0918 19:39:00.876136   15490 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 19:39:00.876154   15490 kubeadm.go:310] 
	I0918 19:39:00.876234   15490 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 19:39:00.876241   15490 kubeadm.go:310] 
	I0918 19:39:00.876264   15490 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 19:39:00.876322   15490 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 19:39:00.876366   15490 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 19:39:00.876372   15490 kubeadm.go:310] 
	I0918 19:39:00.876424   15490 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 19:39:00.876437   15490 kubeadm.go:310] 
	I0918 19:39:00.876483   15490 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 19:39:00.876487   15490 kubeadm.go:310] 
	I0918 19:39:00.876534   15490 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 19:39:00.876616   15490 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 19:39:00.876701   15490 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 19:39:00.876712   15490 kubeadm.go:310] 
	I0918 19:39:00.876886   15490 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 19:39:00.877146   15490 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 19:39:00.877167   15490 kubeadm.go:310] 
	I0918 19:39:00.877275   15490 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wogdo2.j8lh2fdewe9ig9qb \
	I0918 19:39:00.877374   15490 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4b80b69bd62ea10a0c970331c6fe6fb2bbf80336fd8a6d6479e3ae2b7c926678 \
	I0918 19:39:00.877395   15490 kubeadm.go:310] 	--control-plane 
	I0918 19:39:00.877400   15490 kubeadm.go:310] 
	I0918 19:39:00.877483   15490 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 19:39:00.877493   15490 kubeadm.go:310] 
	I0918 19:39:00.877568   15490 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wogdo2.j8lh2fdewe9ig9qb \
	I0918 19:39:00.877669   15490 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4b80b69bd62ea10a0c970331c6fe6fb2bbf80336fd8a6d6479e3ae2b7c926678 
	I0918 19:39:00.877678   15490 cni.go:84] Creating CNI manager for ""
	I0918 19:39:00.877691   15490 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 19:39:00.879410   15490 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 19:39:00.880715   15490 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 19:39:00.891997   15490 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 19:39:00.912139   15490 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 19:39:00.912220   15490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:00.912326   15490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-656419 minikube.k8s.io/updated_at=2024_09_18T19_39_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=addons-656419 minikube.k8s.io/primary=true
	I0918 19:39:00.933235   15490 ops.go:34] apiserver oom_adj: -16
	I0918 19:39:01.056563   15490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:01.557029   15490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:02.056722   15490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:02.556977   15490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:03.057647   15490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:03.557426   15490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:04.056952   15490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:04.556863   15490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:05.056959   15490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:05.186247   15490 kubeadm.go:1113] duration metric: took 4.274095618s to wait for elevateKubeSystemPrivileges
	I0918 19:39:05.186286   15490 kubeadm.go:394] duration metric: took 15.663772189s to StartCluster
	I0918 19:39:05.186309   15490 settings.go:142] acquiring lock: {Name:mk36c0de1e2a43cecbcf652a694a5ff137b22e77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:05.186453   15490 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7655/kubeconfig
	I0918 19:39:05.186947   15490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7655/kubeconfig: {Name:mk4ff50cfe7b72f23c16e4edf19f20767c1dc285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:05.187252   15490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 19:39:05.187277   15490 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 19:39:05.187322   15490 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0918 19:39:05.187440   15490 addons.go:69] Setting yakd=true in profile "addons-656419"
	I0918 19:39:05.187454   15490 addons.go:69] Setting default-storageclass=true in profile "addons-656419"
	I0918 19:39:05.187462   15490 addons.go:234] Setting addon yakd=true in "addons-656419"
	I0918 19:39:05.187475   15490 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-656419"
	I0918 19:39:05.187462   15490 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-656419"
	I0918 19:39:05.187497   15490 host.go:66] Checking if "addons-656419" exists ...
	I0918 19:39:05.187485   15490 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-656419"
	I0918 19:39:05.187512   15490 config.go:182] Loaded profile config "addons-656419": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 19:39:05.187534   15490 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-656419"
	I0918 19:39:05.187544   15490 addons.go:69] Setting ingress=true in profile "addons-656419"
	I0918 19:39:05.187515   15490 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-656419"
	I0918 19:39:05.187554   15490 addons.go:234] Setting addon ingress=true in "addons-656419"
	I0918 19:39:05.187572   15490 host.go:66] Checking if "addons-656419" exists ...
	I0918 19:39:05.187572   15490 host.go:66] Checking if "addons-656419" exists ...
	I0918 19:39:05.187591   15490 host.go:66] Checking if "addons-656419" exists ...
	I0918 19:39:05.187918   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.187946   15490 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-656419"
	I0918 19:39:05.187959   15490 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-656419"
	I0918 19:39:05.187971   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.187536   15490 addons.go:69] Setting helm-tiller=true in profile "addons-656419"
	I0918 19:39:05.188042   15490 addons.go:69] Setting registry=true in profile "addons-656419"
	I0918 19:39:05.188040   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.188054   15490 addons.go:234] Setting addon helm-tiller=true in "addons-656419"
	I0918 19:39:05.187927   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.188070   15490 addons.go:234] Setting addon registry=true in "addons-656419"
	I0918 19:39:05.188079   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.188091   15490 host.go:66] Checking if "addons-656419" exists ...
	I0918 19:39:05.188126   15490 addons.go:69] Setting cloud-spanner=true in profile "addons-656419"
	I0918 19:39:05.188154   15490 addons.go:69] Setting inspektor-gadget=true in profile "addons-656419"
	I0918 19:39:05.188165   15490 addons.go:234] Setting addon cloud-spanner=true in "addons-656419"
	I0918 19:39:05.188169   15490 addons.go:234] Setting addon inspektor-gadget=true in "addons-656419"
	I0918 19:39:05.188177   15490 addons.go:69] Setting metrics-server=true in profile "addons-656419"
	I0918 19:39:05.188196   15490 addons.go:234] Setting addon metrics-server=true in "addons-656419"
	I0918 19:39:05.188137   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.188251   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.188343   15490 addons.go:69] Setting volumesnapshots=true in profile "addons-656419"
	I0918 19:39:05.188359   15490 addons.go:234] Setting addon volumesnapshots=true in "addons-656419"
	I0918 19:39:05.188368   15490 addons.go:69] Setting volcano=true in profile "addons-656419"
	I0918 19:39:05.188391   15490 addons.go:234] Setting addon volcano=true in "addons-656419"
	I0918 19:39:05.188146   15490 addons.go:69] Setting ingress-dns=true in profile "addons-656419"
	I0918 19:39:05.188406   15490 addons.go:234] Setting addon ingress-dns=true in "addons-656419"
	I0918 19:39:05.187441   15490 addons.go:69] Setting gcp-auth=true in profile "addons-656419"
	I0918 19:39:05.187925   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.188454   15490 mustload.go:65] Loading cluster: addons-656419
	I0918 19:39:05.188465   15490 addons.go:69] Setting storage-provisioner=true in profile "addons-656419"
	I0918 19:39:05.188473   15490 addons.go:234] Setting addon storage-provisioner=true in "addons-656419"
	I0918 19:39:05.188478   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.188520   15490 host.go:66] Checking if "addons-656419" exists ...
	I0918 19:39:05.188599   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.188626   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.188657   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.188714   15490 host.go:66] Checking if "addons-656419" exists ...
	I0918 19:39:05.188842   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.188892   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.189174   15490 host.go:66] Checking if "addons-656419" exists ...
	I0918 19:39:05.189360   15490 host.go:66] Checking if "addons-656419" exists ...
	I0918 19:39:05.189448   15490 host.go:66] Checking if "addons-656419" exists ...
	I0918 19:39:05.189666   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.189716   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.189755   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.189790   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.189810   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.189731   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.189849   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.189876   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.189855   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.189902   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.189950   15490 host.go:66] Checking if "addons-656419" exists ...
	I0918 19:39:05.190107   15490 config.go:182] Loaded profile config "addons-656419": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 19:39:05.190329   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.190361   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.190436   15490 host.go:66] Checking if "addons-656419" exists ...
	I0918 19:39:05.190540   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.190600   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.189897   15490 host.go:66] Checking if "addons-656419" exists ...
	I0918 19:39:05.190777   15490 out.go:177] * Verifying Kubernetes components...
	I0918 19:39:05.193036   15490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:39:05.209548   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44817
	I0918 19:39:05.213035   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37875
	I0918 19:39:05.213120   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33213
	I0918 19:39:05.213039   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45795
	I0918 19:39:05.229741   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.229789   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.230098   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.230246   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.230487   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.230909   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.230944   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.231068   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.231271   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.231285   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.231339   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.231362   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.231712   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.231740   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.231810   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.231957   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.232420   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.232444   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.232458   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.232460   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.232953   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.232991   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.233476   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.233536   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.234474   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.234518   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.236772   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.236820   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.237101   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46789
	I0918 19:39:05.237571   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.238724   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.238743   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.239241   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.240026   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.240079   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.258682   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44381
	I0918 19:39:05.259538   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.260221   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.260251   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.260705   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.260955   15490 main.go:141] libmachine: (addons-656419) Calling .GetState
	I0918 19:39:05.263120   15490 host.go:66] Checking if "addons-656419" exists ...
	I0918 19:39:05.263563   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.263608   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.271024   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42047
	I0918 19:39:05.271684   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.272400   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.272423   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.273133   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.273954   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.274007   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.276524   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39585
	I0918 19:39:05.280293   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.280614   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44039
	I0918 19:39:05.281030   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.281051   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.281671   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.281743   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.281956   15490 main.go:141] libmachine: (addons-656419) Calling .GetState
	I0918 19:39:05.282842   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.282861   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.284760   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:39:05.285332   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.285947   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.285994   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.287413   15490 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0918 19:39:05.289424   15490 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0918 19:39:05.289465   15490 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0918 19:39:05.289491   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:39:05.291906   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36223
	I0918 19:39:05.292590   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.293220   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.293715   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.293735   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.294310   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.294394   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:39:05.294409   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.294633   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:39:05.294815   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:39:05.294943   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.294992   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.294951   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:39:05.295181   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	I0918 19:39:05.303670   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45337
	I0918 19:39:05.304625   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.305600   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.305627   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.306136   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.306623   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36269
	I0918 19:39:05.306697   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32881
	I0918 19:39:05.307572   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.307744   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.307841   15490 main.go:141] libmachine: (addons-656419) Calling .GetState
	I0918 19:39:05.310416   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:39:05.310512   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33831
	I0918 19:39:05.310542   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.310562   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.310714   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.310725   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.311144   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.311275   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:39:05.311538   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38733
	I0918 19:39:05.311658   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.312446   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.312465   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.312524   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35061
	I0918 19:39:05.312627   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.312749   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.313034   15490 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0918 19:39:05.313380   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.313397   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.313832   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.314331   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.314390   15490 main.go:141] libmachine: (addons-656419) Calling .GetState
	I0918 19:39:05.314725   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46883
	I0918 19:39:05.314845   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39405
	I0918 19:39:05.315000   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39165
	I0918 19:39:05.315142   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.315500   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.315654   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.315695   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.315960   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.315968   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.316001   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.316473   15490 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 19:39:05.316987   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.317017   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.317054   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:39:05.317178   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.317193   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.318765   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.318806   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.319532   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.319543   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I0918 19:39:05.319571   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.319720   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.319017   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.319789   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.320022   15490 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 19:39:05.320204   15490 main.go:141] libmachine: (addons-656419) Calling .GetState
	I0918 19:39:05.320287   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.320329   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.320499   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.321125   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.321146   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.321230   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.321714   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.321735   15490 main.go:141] libmachine: (addons-656419) Calling .GetState
	I0918 19:39:05.321894   15490 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0918 19:39:05.321914   15490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0918 19:39:05.321932   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:39:05.322004   15490 out.go:177]   - Using image docker.io/registry:2.8.3
	I0918 19:39:05.322399   15490 main.go:141] libmachine: (addons-656419) Calling .GetState
	I0918 19:39:05.325424   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:39:05.325533   15490 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0918 19:39:05.326786   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:39:05.326808   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.327083   15490 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0918 19:39:05.327370   15490 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0918 19:39:05.327386   15490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0918 19:39:05.327408   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:39:05.336187   15490 addons.go:234] Setting addon default-storageclass=true in "addons-656419"
	I0918 19:39:05.336254   15490 host.go:66] Checking if "addons-656419" exists ...
	I0918 19:39:05.336754   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.344998   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.345061   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.345243   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:39:05.345261   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:39:05.345393   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40417
	I0918 19:39:05.345208   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43133
	I0918 19:39:05.345410   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45991
	I0918 19:39:05.345398   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.345537   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:39:05.345562   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.345608   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:39:05.345608   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:39:05.345382   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.345884   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:39:05.345966   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36239
	I0918 19:39:05.346156   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	I0918 19:39:05.345655   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.346290   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:39:05.346333   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.346386   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.346393   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.346504   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:39:05.346605   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.346627   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33869
	I0918 19:39:05.346657   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.346809   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	I0918 19:39:05.347196   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.347830   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.347375   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.347896   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.347454   15490 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0918 19:39:05.348146   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32771
	I0918 19:39:05.348210   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.348249   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.348325   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.348429   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.347533   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.348466   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.347555   15490 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0918 19:39:05.348596   15490 main.go:141] libmachine: (addons-656419) Calling .GetState
	I0918 19:39:05.349062   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.349171   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.349668   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39057
	I0918 19:39:05.349827   15490 main.go:141] libmachine: (addons-656419) Calling .GetState
	I0918 19:39:05.349840   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.349856   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.349880   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.350113   15490 main.go:141] libmachine: (addons-656419) Calling .GetState
	I0918 19:39:05.350198   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.350271   15490 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0918 19:39:05.350325   15490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0918 19:39:05.350330   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.350344   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:39:05.350503   15490 main.go:141] libmachine: (addons-656419) Calling .GetState
	I0918 19:39:05.351252   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.351270   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.351830   15490 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0918 19:39:05.352213   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.352285   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.354270   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.354302   15490 main.go:141] libmachine: (addons-656419) Calling .GetState
	I0918 19:39:05.354326   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.354744   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:39:05.354903   15490 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0918 19:39:05.356234   15490 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0918 19:39:05.356563   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.356606   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:39:05.357389   15490 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-656419"
	I0918 19:39:05.357438   15490 host.go:66] Checking if "addons-656419" exists ...
	I0918 19:39:05.357682   15490 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0918 19:39:05.357741   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:39:05.357772   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:39:05.358455   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.357810   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.358484   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.358614   15490 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0918 19:39:05.359012   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:39:05.358853   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:39:05.359178   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:39:05.359407   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	I0918 19:39:05.359731   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41847
	I0918 19:39:05.360238   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:39:05.360633   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.360728   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.360744   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.360780   15490 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0918 19:39:05.360795   15490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0918 19:39:05.360814   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:39:05.360879   15490 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0918 19:39:05.361219   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.361235   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.361728   15490 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0918 19:39:05.361805   15490 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0918 19:39:05.362540   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.362617   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.362777   15490 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0918 19:39:05.362780   15490 main.go:141] libmachine: (addons-656419) Calling .GetState
	I0918 19:39:05.363944   15490 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0918 19:39:05.363971   15490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0918 19:39:05.363991   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:39:05.362980   15490 main.go:141] libmachine: (addons-656419) Calling .GetState
	I0918 19:39:05.365534   15490 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0918 19:39:05.365660   15490 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0918 19:39:05.365713   15490 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0918 19:39:05.365733   15490 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0918 19:39:05.365753   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:39:05.367201   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:39:05.368498   15490 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0918 19:39:05.368520   15490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0918 19:39:05.368538   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:39:05.368547   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.368865   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:39:05.369140   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:39:05.369238   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:39:05.369267   15490 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0918 19:39:05.369339   15490 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0918 19:39:05.369379   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.369420   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:39:05.369841   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:39:05.369952   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	I0918 19:39:05.371373   15490 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0918 19:39:05.371391   15490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0918 19:39:05.371410   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:39:05.371445   15490 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0918 19:39:05.371453   15490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0918 19:39:05.371466   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:39:05.371511   15490 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 19:39:05.371943   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.371982   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.372634   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.373196   15490 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:39:05.373213   15490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 19:39:05.373230   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:39:05.373388   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:39:05.373592   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.373900   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:39:05.374410   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:39:05.374821   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:39:05.375345   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	I0918 19:39:05.375736   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:39:05.376425   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.376529   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.376963   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:39:05.376983   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.377184   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:39:05.377354   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:39:05.377505   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:39:05.377640   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	I0918 19:39:05.377850   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.378299   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.378817   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:39:05.378845   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.379193   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:39:05.379283   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.379507   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:39:05.379580   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:39:05.379702   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:39:05.379735   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:39:05.379815   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:39:05.379856   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:39:05.379898   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:39:05.379944   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	I0918 19:39:05.380301   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:39:05.380307   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	I0918 19:39:05.380315   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:39:05.380333   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.380356   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:39:05.380514   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:39:05.380561   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:39:05.380707   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:39:05.380724   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	I0918 19:39:05.380878   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	I0918 19:39:05.383297   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39201
	I0918 19:39:05.383714   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42347
	I0918 19:39:05.383949   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.384632   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.384759   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.384774   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.385328   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.385350   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.385417   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.385815   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.386016   15490 main.go:141] libmachine: (addons-656419) Calling .GetState
	I0918 19:39:05.386201   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.386248   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.387920   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:39:05.389005   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39295
	I0918 19:39:05.390032   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.390230   15490 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0918 19:39:05.390760   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.390785   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.391557   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45713
	I0918 19:39:05.391604   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.392099   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:05.392127   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:05.392346   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.392377   15490 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0918 19:39:05.392394   15490 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0918 19:39:05.392414   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:39:05.393479   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.393504   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.394007   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.394353   15490 main.go:141] libmachine: (addons-656419) Calling .GetState
	I0918 19:39:05.396231   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:39:05.396403   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.396828   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:39:05.396880   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.397048   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:39:05.397202   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:39:05.397319   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:39:05.397429   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	I0918 19:39:05.398741   15490 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0918 19:39:05.400360   15490 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 19:39:05.400383   15490 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 19:39:05.400437   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:39:05.403792   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.404239   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:39:05.404270   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.404549   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:39:05.404745   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:39:05.404932   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:39:05.405053   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	W0918 19:39:05.405770   15490 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:55558->192.168.39.154:22: read: connection reset by peer
	I0918 19:39:05.405815   15490 retry.go:31] will retry after 262.408803ms: ssh: handshake failed: read tcp 192.168.39.1:55558->192.168.39.154:22: read: connection reset by peer
	I0918 19:39:05.412657   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37475
	I0918 19:39:05.413169   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.413863   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.413888   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.414386   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39247
	I0918 19:39:05.414583   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.414862   15490 main.go:141] libmachine: (addons-656419) Calling .GetState
	I0918 19:39:05.414934   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:05.415609   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:05.415634   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:05.416079   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:05.416319   15490 main.go:141] libmachine: (addons-656419) Calling .GetState
	I0918 19:39:05.416744   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:39:05.417893   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:39:05.418088   15490 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 19:39:05.418105   15490 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 19:39:05.418126   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:39:05.418879   15490 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0918 19:39:05.420727   15490 out.go:177]   - Using image docker.io/busybox:stable
	I0918 19:39:05.421662   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.422015   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:39:05.422036   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.422262   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:39:05.422428   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:39:05.422557   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:39:05.422658   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	I0918 19:39:05.422705   15490 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0918 19:39:05.422728   15490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0918 19:39:05.422752   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	W0918 19:39:05.424927   15490 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:55562->192.168.39.154:22: read: connection reset by peer
	I0918 19:39:05.424956   15490 retry.go:31] will retry after 157.615265ms: ssh: handshake failed: read tcp 192.168.39.1:55562->192.168.39.154:22: read: connection reset by peer
	I0918 19:39:05.425889   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.426279   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:39:05.426309   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:05.426451   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:39:05.426639   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:39:05.426789   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:39:05.426909   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	W0918 19:39:05.441208   15490 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:55578->192.168.39.154:22: read: connection reset by peer
	I0918 19:39:05.441241   15490 retry.go:31] will retry after 187.191571ms: ssh: handshake failed: read tcp 192.168.39.1:55578->192.168.39.154:22: read: connection reset by peer
	I0918 19:39:05.715873   15490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0918 19:39:05.854580   15490 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0918 19:39:05.854609   15490 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0918 19:39:05.869492   15490 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0918 19:39:05.869521   15490 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0918 19:39:05.934963   15490 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0918 19:39:05.934984   15490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0918 19:39:05.982737   15490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0918 19:39:05.989316   15490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:39:06.037785   15490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 19:39:06.037941   15490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0918 19:39:06.043462   15490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0918 19:39:06.060393   15490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0918 19:39:06.084315   15490 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0918 19:39:06.084354   15490 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0918 19:39:06.097088   15490 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0918 19:39:06.097113   15490 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0918 19:39:06.130278   15490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0918 19:39:06.165648   15490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 19:39:06.204880   15490 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0918 19:39:06.204929   15490 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0918 19:39:06.284518   15490 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0918 19:39:06.284583   15490 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0918 19:39:06.464377   15490 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0918 19:39:06.464406   15490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0918 19:39:06.530698   15490 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0918 19:39:06.530726   15490 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0918 19:39:06.624953   15490 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 19:39:06.624979   15490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0918 19:39:06.663561   15490 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0918 19:39:06.663591   15490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0918 19:39:06.667470   15490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0918 19:39:06.680735   15490 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0918 19:39:06.680759   15490 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0918 19:39:06.756030   15490 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0918 19:39:06.756056   15490 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0918 19:39:06.933806   15490 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0918 19:39:06.933830   15490 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0918 19:39:07.037316   15490 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0918 19:39:07.037344   15490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0918 19:39:07.071614   15490 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0918 19:39:07.071643   15490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0918 19:39:07.154040   15490 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0918 19:39:07.154070   15490 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0918 19:39:07.170652   15490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0918 19:39:07.250780   15490 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 19:39:07.250809   15490 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 19:39:07.332602   15490 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0918 19:39:07.332628   15490 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0918 19:39:07.340491   15490 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0918 19:39:07.340515   15490 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0918 19:39:07.345288   15490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0918 19:39:07.374473   15490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0918 19:39:07.599827   15490 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0918 19:39:07.599851   15490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0918 19:39:07.619213   15490 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0918 19:39:07.619247   15490 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0918 19:39:07.633969   15490 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 19:39:07.633992   15490 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 19:39:07.643318   15490 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0918 19:39:07.643342   15490 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0918 19:39:07.822888   15490 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0918 19:39:07.822912   15490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0918 19:39:07.895153   15490 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0918 19:39:07.895184   15490 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0918 19:39:07.895380   15490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 19:39:07.955165   15490 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:39:07.955187   15490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0918 19:39:08.010988   15490 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0918 19:39:08.011014   15490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0918 19:39:08.082280   15490 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 19:39:08.082300   15490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0918 19:39:08.290724   15490 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0918 19:39:08.290747   15490 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0918 19:39:08.388509   15490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 19:39:08.424897   15490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:39:08.580446   15490 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0918 19:39:08.580476   15490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0918 19:39:08.850480   15490 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0918 19:39:08.850505   15490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0918 19:39:09.408634   15490 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 19:39:09.408662   15490 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0918 19:39:09.668406   15490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 19:39:12.363240   15490 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0918 19:39:12.363291   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:39:12.366970   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:12.367538   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:39:12.367569   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:12.367748   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:39:12.367964   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:39:12.368147   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:39:12.368328   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	I0918 19:39:13.748596   15490 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0918 19:39:13.936775   15490 addons.go:234] Setting addon gcp-auth=true in "addons-656419"
	I0918 19:39:13.936840   15490 host.go:66] Checking if "addons-656419" exists ...
	I0918 19:39:13.937361   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:13.937409   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:13.958358   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39345
	I0918 19:39:13.958835   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:13.959454   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:13.959480   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:13.960168   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:13.960806   15490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:39:13.960847   15490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:13.980621   15490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39375
	I0918 19:39:13.981259   15490 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:13.981834   15490 main.go:141] libmachine: Using API Version  1
	I0918 19:39:13.981856   15490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:13.982399   15490 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:13.982643   15490 main.go:141] libmachine: (addons-656419) Calling .GetState
	I0918 19:39:13.984884   15490 main.go:141] libmachine: (addons-656419) Calling .DriverName
	I0918 19:39:13.985194   15490 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0918 19:39:13.985225   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHHostname
	I0918 19:39:13.988760   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:13.989356   15490 main.go:141] libmachine: (addons-656419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:13:5c", ip: ""} in network mk-addons-656419: {Iface:virbr1 ExpiryTime:2024-09-18 20:38:25 +0000 UTC Type:0 Mac:52:54:00:61:13:5c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-656419 Clientid:01:52:54:00:61:13:5c}
	I0918 19:39:13.989395   15490 main.go:141] libmachine: (addons-656419) DBG | domain addons-656419 has defined IP address 192.168.39.154 and MAC address 52:54:00:61:13:5c in network mk-addons-656419
	I0918 19:39:13.989601   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHPort
	I0918 19:39:13.989846   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHKeyPath
	I0918 19:39:13.990078   15490 main.go:141] libmachine: (addons-656419) Calling .GetSSHUsername
	I0918 19:39:13.990513   15490 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/addons-656419/id_rsa Username:docker}
	I0918 19:39:17.139057   15490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.423144296s)
	I0918 19:39:17.139112   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:17.139123   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:17.139125   15490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.149785723s)
	I0918 19:39:17.139164   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:17.139072   15490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.156299913s)
	I0918 19:39:17.139180   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:17.139220   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:17.139237   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:17.139251   15490 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (11.101292228s)
	I0918 19:39:17.139276   15490 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0918 19:39:17.139222   15490 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (11.101396195s)
	I0918 19:39:17.139452   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:17.139492   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:17.139500   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:17.139508   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:17.139515   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:17.139618   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:17.139633   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:17.139647   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:17.139661   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:17.139668   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:17.139690   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:17.139699   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:17.139709   15490 addons.go:475] Verifying addon ingress=true in "addons-656419"
	I0918 19:39:17.140016   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:17.140263   15490 node_ready.go:35] waiting up to 6m0s for node "addons-656419" to be "Ready" ...
	I0918 19:39:17.140456   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:17.140531   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:17.140540   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:17.140774   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:17.140795   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:17.140809   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:17.140820   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:17.141107   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:17.141154   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:17.141169   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:17.142103   15490 out.go:177] * Verifying ingress addon...
	I0918 19:39:17.144859   15490 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0918 19:39:17.162837   15490 node_ready.go:49] node "addons-656419" has status "Ready":"True"
	I0918 19:39:17.162862   15490 node_ready.go:38] duration metric: took 22.578015ms for node "addons-656419" to be "Ready" ...
	I0918 19:39:17.162871   15490 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 19:39:17.176239   15490 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0918 19:39:17.176269   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:17.189877   15490 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bjkr7" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:17.244730   15490 pod_ready.go:93] pod "coredns-7c65d6cfc9-bjkr7" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:17.244759   15490 pod_ready.go:82] duration metric: took 54.853304ms for pod "coredns-7c65d6cfc9-bjkr7" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:17.244771   15490 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jgzt9" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:17.651200   15490 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-656419" context rescaled to 1 replicas
	I0918 19:39:17.681318   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:18.268400   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:18.695808   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:19.204812   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:19.348106   15490 pod_ready.go:103] pod "coredns-7c65d6cfc9-jgzt9" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:19.703235   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:20.159376   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:20.565563   15490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (14.505133638s)
	I0918 19:39:20.565625   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.565636   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.565644   15490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (14.435327159s)
	I0918 19:39:20.565684   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.565703   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.565700   15490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (14.400013451s)
	I0918 19:39:20.565731   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.565747   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.565749   15490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (13.898251034s)
	I0918 19:39:20.565767   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.565780   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.565830   15490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (13.395134816s)
	I0918 19:39:20.565929   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.565958   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.565959   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.566087   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.566095   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.566103   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.566134   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:20.566143   15490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (13.191636861s)
	I0918 19:39:20.566159   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:20.566166   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.566185   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.566193   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.566195   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.566201   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.566207   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.566310   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:20.566355   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:20.566377   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:20.566472   15490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (12.671064965s)
	I0918 19:39:20.566495   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.566504   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.566614   15490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (12.178065801s)
	I0918 19:39:20.566631   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.566640   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.566773   15490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (12.141815058s)
	W0918 19:39:20.566803   15490 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 19:39:20.566823   15490 retry.go:31] will retry after 177.100364ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 19:39:20.566880   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.566890   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.566898   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.566906   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.566987   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:20.567016   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.567023   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.565930   15490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (13.220550385s)
	I0918 19:39:20.567844   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.567856   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.565933   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.567903   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.567911   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.567918   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.567999   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.568007   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.568158   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.568182   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.568201   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.568236   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.568377   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:20.568417   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.568431   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.568439   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.568446   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.568711   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:20.568735   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.568741   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.568747   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.568753   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.568805   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:20.569107   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:20.569137   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.569143   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.569455   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.569469   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.569477   15490 addons.go:475] Verifying addon registry=true in "addons-656419"
	I0918 19:39:20.569645   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:20.569703   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.569710   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.569821   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.569841   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.569858   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.569874   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.569932   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:20.569964   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.569986   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.570003   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.570019   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.570631   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:20.570669   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.570668   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.570685   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.570718   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:20.570740   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.570746   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.570810   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:20.570830   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.570836   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.570676   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.571176   15490 addons.go:475] Verifying addon metrics-server=true in "addons-656419"
	I0918 19:39:20.571583   15490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (14.528090246s)
	I0918 19:39:20.571641   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.571661   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.572281   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:20.572349   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.572368   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.572398   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.572421   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.572618   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:20.572650   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.572657   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.572957   15490 out.go:177] * Verifying registry addon...
	I0918 19:39:20.572961   15490 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-656419 service yakd-dashboard -n yakd-dashboard
	
	I0918 19:39:20.575690   15490 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0918 19:39:20.605932   15490 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0918 19:39:20.605956   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:20.616741   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.616766   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.617025   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.617040   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	W0918 19:39:20.617134   15490 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0918 19:39:20.679828   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:20.679849   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:20.680195   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:20.680218   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:20.680232   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:20.745066   15490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:39:20.830677   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:21.235435   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:21.254777   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:21.396534   15490 pod_ready.go:103] pod "coredns-7c65d6cfc9-jgzt9" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:21.664027   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:21.665554   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:22.115450   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:22.183620   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:22.312461   15490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (12.644002815s)
	I0918 19:39:22.312521   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:22.312538   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:22.312473   15490 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (8.327255092s)
	I0918 19:39:22.312807   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:22.312819   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:22.312838   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:22.312847   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:22.313142   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:22.313159   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:22.313171   15490 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-656419"
	I0918 19:39:22.315474   15490 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0918 19:39:22.315474   15490 out.go:177] * Verifying csi-hostpath-driver addon...
	I0918 19:39:22.318054   15490 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 19:39:22.320391   15490 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0918 19:39:22.320416   15490 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0918 19:39:22.320434   15490 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0918 19:39:22.425678   15490 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0918 19:39:22.425713   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:22.445344   15490 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0918 19:39:22.445370   15490 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0918 19:39:22.501437   15490 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 19:39:22.501463   15490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0918 19:39:22.574124   15490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 19:39:22.591421   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:22.652980   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:22.825512   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:23.078862   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:23.151794   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:23.325701   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:23.344749   15490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.599631025s)
	I0918 19:39:23.344807   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:23.344822   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:23.345112   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:23.345131   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:23.345140   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:23.345148   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:23.346500   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:23.346509   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:23.346524   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:23.585494   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:23.688740   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:23.799462   15490 pod_ready.go:98] pod "coredns-7c65d6cfc9-jgzt9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:23 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:05 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:05 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:05 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:05 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.154 HostIPs:[{IP:192.168.39.154}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-18 19:39:05 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-18 19:39:06 +0000 UTC,FinishedAt:2024-09-18 19:39:22 +0000 UTC,ContainerID:docker://0889453fbfd022664a8ccfb60d0e8870a55226420da6b2ff819ec27f178c59e9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://0889453fbfd022664a8ccfb60d0e8870a55226420da6b2ff819ec27f178c59e9 Started:0xc0025fc4d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000706960} {Name:kube-api-access-6ljxg MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000706970}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0918 19:39:23.799504   15490 pod_ready.go:82] duration metric: took 6.554723754s for pod "coredns-7c65d6cfc9-jgzt9" in "kube-system" namespace to be "Ready" ...
	E0918 19:39:23.799523   15490 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-jgzt9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:23 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:05 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:05 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:05 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:05 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.154 HostIPs:[{IP:192.168.39.154}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-18 19:39:05 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-18 19:39:06 +0000 UTC,FinishedAt:2024-09-18 19:39:22 +0000 UTC,ContainerID:docker://0889453fbfd022664a8ccfb60d0e8870a55226420da6b2ff819ec27f178c59e9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://0889453fbfd022664a8ccfb60d0e8870a55226420da6b2ff819ec27f178c59e9 Started:0xc0025fc4d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000706960} {Name:kube-api-access-6ljxg MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000706970}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0918 19:39:23.799540   15490 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-656419" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:23.804816   15490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.230649595s)
	I0918 19:39:23.804863   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:23.804875   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:23.805182   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:23.805204   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:23.805219   15490 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:23.805226   15490 main.go:141] libmachine: (addons-656419) Calling .Close
	I0918 19:39:23.805230   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:23.805442   15490 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:23.805458   15490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:23.805461   15490 main.go:141] libmachine: (addons-656419) DBG | Closing plugin on server side
	I0918 19:39:23.808120   15490 addons.go:475] Verifying addon gcp-auth=true in "addons-656419"
	I0918 19:39:23.810356   15490 out.go:177] * Verifying gcp-auth addon...
	I0918 19:39:23.813262   15490 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0918 19:39:23.832066   15490 pod_ready.go:93] pod "etcd-addons-656419" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:23.832089   15490 pod_ready.go:82] duration metric: took 32.54174ms for pod "etcd-addons-656419" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:23.832100   15490 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-656419" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:23.839596   15490 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0918 19:39:23.845220   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:23.849576   15490 pod_ready.go:93] pod "kube-apiserver-addons-656419" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:23.849612   15490 pod_ready.go:82] duration metric: took 17.50388ms for pod "kube-apiserver-addons-656419" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:23.849627   15490 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-656419" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:23.859195   15490 pod_ready.go:93] pod "kube-controller-manager-addons-656419" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:23.859245   15490 pod_ready.go:82] duration metric: took 9.583832ms for pod "kube-controller-manager-addons-656419" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:23.859276   15490 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qc45h" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:23.870935   15490 pod_ready.go:93] pod "kube-proxy-qc45h" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:23.870960   15490 pod_ready.go:82] duration metric: took 11.675145ms for pod "kube-proxy-qc45h" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:23.870971   15490 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-656419" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:24.082271   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:24.149251   15490 pod_ready.go:93] pod "kube-scheduler-addons-656419" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:24.149275   15490 pod_ready.go:82] duration metric: took 278.295203ms for pod "kube-scheduler-addons-656419" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:24.149293   15490 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-9748w" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:24.183406   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:24.325145   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:24.580343   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:24.648688   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:24.824714   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:25.080392   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:25.180740   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:25.324837   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:25.580324   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:25.649551   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:25.824664   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:26.079695   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:26.149744   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:26.156096   15490 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9748w" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:26.325823   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:26.580702   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:26.649720   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:26.829771   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:27.079262   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:27.149716   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:27.325158   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:27.581049   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:27.649926   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:27.826275   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:28.080218   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:28.149723   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:28.326798   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:28.580653   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:28.650116   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:28.661835   15490 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9748w" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:28.826084   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:29.079391   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:29.150562   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:29.325416   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:29.801547   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:29.845893   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:29.852603   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:30.110332   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:30.161947   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:30.326765   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:30.607485   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:30.660008   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:30.668929   15490 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9748w" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:30.827710   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:31.079998   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:31.182270   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:31.326401   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:31.580021   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:31.650225   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:31.825621   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:32.079804   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:32.151915   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:32.324679   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:32.579452   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:32.649298   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:32.826244   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:33.080272   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:33.150025   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:33.155265   15490 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9748w" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:33.324871   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:33.579917   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:33.650497   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:33.825071   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:34.080569   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:34.150905   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:34.324998   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:34.589217   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:34.652295   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:34.826968   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:35.080408   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:35.149479   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:35.155339   15490 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9748w" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:35.325700   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:35.579462   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:35.649357   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:35.825577   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:36.080530   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:36.149641   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:36.325676   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:36.579316   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:36.650014   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:36.825154   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:37.079713   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:37.149970   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:37.155855   15490 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9748w" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:37.325489   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:37.579718   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:37.649215   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:37.826032   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:38.079153   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:38.148963   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:38.330048   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:38.580289   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:38.650386   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:39.251227   15490 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9748w" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:39.251662   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:39.251700   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:39.251995   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:39.347692   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:39.582129   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:39.653568   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:39.826121   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:40.091794   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:40.185823   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:40.326468   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:40.592416   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:40.650641   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:40.827594   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:41.082228   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:41.150938   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:41.325545   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:41.580891   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:41.649772   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:41.655940   15490 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9748w" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:41.825755   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:42.079617   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:42.149850   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:42.325723   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:42.580641   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:42.649095   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:42.825321   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:43.079379   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:43.148608   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:43.325750   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:43.580234   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:43.650370   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:43.657504   15490 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9748w" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:44.219287   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:44.222267   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:44.223217   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:44.326951   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:44.582204   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:44.650458   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:44.826777   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:45.079758   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:45.149382   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:45.338786   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:45.579711   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:45.649341   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:45.825360   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:46.079719   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:46.149741   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:46.155673   15490 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9748w" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:46.328443   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:46.580635   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:46.653733   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:46.827595   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:47.079140   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:47.152974   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:47.325472   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:47.579485   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:47.649405   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:47.826078   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:48.080133   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:48.149881   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:48.325702   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:48.579318   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:48.648488   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:48.655209   15490 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9748w" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:48.825353   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:49.080365   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:49.149533   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:49.324760   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:49.581173   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:49.650879   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:49.825710   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:50.080996   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:50.151977   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:50.325589   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:50.581314   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:50.649617   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:50.657743   15490 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9748w" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:50.826840   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:51.081255   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:51.153066   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:51.329199   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:51.623606   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:51.657047   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:51.860786   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:52.095061   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:52.189005   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:52.466300   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:52.580190   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:52.730145   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:52.733239   15490 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9748w" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:52.825746   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:53.079938   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:53.152745   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:53.324965   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:53.580453   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:53.650768   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:53.825572   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:54.084004   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:54.150220   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:54.326646   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:54.582827   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:54.649782   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:54.919899   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:55.079607   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:55.150269   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:55.156218   15490 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9748w" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:55.325993   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:55.580308   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:55.650424   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:55.825786   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:56.084560   15490 kapi.go:107] duration metric: took 35.508866881s to wait for kubernetes.io/minikube-addons=registry ...
	I0918 19:39:56.309041   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:56.326417   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:56.652264   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:56.829559   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:57.155276   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:57.175890   15490 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9748w" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:57.336152   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:57.658369   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:57.825197   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:58.149209   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:58.325485   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:58.649174   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:58.826424   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:59.148648   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:59.156040   15490 pod_ready.go:93] pod "metrics-server-84c5f94fbc-9748w" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:59.156072   15490 pod_ready.go:82] duration metric: took 35.006770221s for pod "metrics-server-84c5f94fbc-9748w" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:59.156087   15490 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-mpg2d" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:59.162195   15490 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-mpg2d" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:59.162228   15490 pod_ready.go:82] duration metric: took 6.131916ms for pod "nvidia-device-plugin-daemonset-mpg2d" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:59.162255   15490 pod_ready.go:39] duration metric: took 41.999371939s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 19:39:59.162290   15490 api_server.go:52] waiting for apiserver process to appear ...
	I0918 19:39:59.162352   15490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:39:59.193001   15490 api_server.go:72] duration metric: took 54.005690346s to wait for apiserver process to appear ...
	I0918 19:39:59.193032   15490 api_server.go:88] waiting for apiserver healthz status ...
	I0918 19:39:59.193057   15490 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	I0918 19:39:59.197656   15490 api_server.go:279] https://192.168.39.154:8443/healthz returned 200:
	ok
	I0918 19:39:59.198756   15490 api_server.go:141] control plane version: v1.31.1
	I0918 19:39:59.198784   15490 api_server.go:131] duration metric: took 5.743696ms to wait for apiserver health ...
	I0918 19:39:59.198793   15490 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 19:39:59.210237   15490 system_pods.go:59] 18 kube-system pods found
	I0918 19:39:59.210283   15490 system_pods.go:61] "coredns-7c65d6cfc9-bjkr7" [b5394a3f-1d23-4e69-b147-6d537b1f8efc] Running
	I0918 19:39:59.210298   15490 system_pods.go:61] "csi-hostpath-attacher-0" [a556d3b6-f628-4fea-a9b7-39b75589fac7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 19:39:59.210310   15490 system_pods.go:61] "csi-hostpath-resizer-0" [49fee6d2-f921-4588-95dc-aeebba839421] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 19:39:59.210322   15490 system_pods.go:61] "csi-hostpathplugin-c8vf5" [011ca614-7c60-4d8e-a0f4-8ac9d166514a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 19:39:59.210330   15490 system_pods.go:61] "etcd-addons-656419" [6a78eda2-a1e8-4ee4-a3f9-d1cd6ff70448] Running
	I0918 19:39:59.210340   15490 system_pods.go:61] "kube-apiserver-addons-656419" [2d5f6e1b-b2bb-4ebf-b32a-990aa2db6645] Running
	I0918 19:39:59.210350   15490 system_pods.go:61] "kube-controller-manager-addons-656419" [2cbd657a-4f1c-4cc2-99af-aad91d41a600] Running
	I0918 19:39:59.210355   15490 system_pods.go:61] "kube-ingress-dns-minikube" [e4914a00-328a-4acc-8954-d7a7130d5ce3] Running
	I0918 19:39:59.210362   15490 system_pods.go:61] "kube-proxy-qc45h" [f0937c78-8ea7-45f2-898d-43fb13cf4ad8] Running
	I0918 19:39:59.210368   15490 system_pods.go:61] "kube-scheduler-addons-656419" [3f19a718-6190-42e1-bced-8da05f9c3d0a] Running
	I0918 19:39:59.210374   15490 system_pods.go:61] "metrics-server-84c5f94fbc-9748w" [af8930c7-4faf-4d6d-b064-02a9a6ae3479] Running
	I0918 19:39:59.210380   15490 system_pods.go:61] "nvidia-device-plugin-daemonset-mpg2d" [6a45797b-d7b4-423b-93f3-b1a9a57eba5f] Running
	I0918 19:39:59.210386   15490 system_pods.go:61] "registry-66c9cd494c-vvhf9" [456d61ad-102d-4e2c-9b99-6bbce9fe2788] Running
	I0918 19:39:59.210391   15490 system_pods.go:61] "registry-proxy-vrcfx" [167c8d8a-ffcc-4e8d-be37-1288a0ac0e73] Running
	I0918 19:39:59.210401   15490 system_pods.go:61] "snapshot-controller-56fcc65765-clrrz" [bb8f5bc8-83cc-49e9-9cf1-4c45a423de87] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:39:59.210410   15490 system_pods.go:61] "snapshot-controller-56fcc65765-xctkv" [25335c1a-35f9-4d94-9cea-0afe13fa7191] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:39:59.210419   15490 system_pods.go:61] "storage-provisioner" [60dc227b-8c52-4c86-946d-00e7e31184d2] Running
	I0918 19:39:59.210426   15490 system_pods.go:61] "tiller-deploy-b48cc5f79-9vcgn" [3f9da820-37d4-44d6-a61e-f66ccc7d7588] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0918 19:39:59.210438   15490 system_pods.go:74] duration metric: took 11.637589ms to wait for pod list to return data ...
	I0918 19:39:59.210446   15490 default_sa.go:34] waiting for default service account to be created ...
	I0918 19:39:59.214581   15490 default_sa.go:45] found service account: "default"
	I0918 19:39:59.214611   15490 default_sa.go:55] duration metric: took 4.157862ms for default service account to be created ...
	I0918 19:39:59.214623   15490 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 19:39:59.224814   15490 system_pods.go:86] 18 kube-system pods found
	I0918 19:39:59.224849   15490 system_pods.go:89] "coredns-7c65d6cfc9-bjkr7" [b5394a3f-1d23-4e69-b147-6d537b1f8efc] Running
	I0918 19:39:59.224863   15490 system_pods.go:89] "csi-hostpath-attacher-0" [a556d3b6-f628-4fea-a9b7-39b75589fac7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 19:39:59.224871   15490 system_pods.go:89] "csi-hostpath-resizer-0" [49fee6d2-f921-4588-95dc-aeebba839421] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 19:39:59.224880   15490 system_pods.go:89] "csi-hostpathplugin-c8vf5" [011ca614-7c60-4d8e-a0f4-8ac9d166514a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 19:39:59.224887   15490 system_pods.go:89] "etcd-addons-656419" [6a78eda2-a1e8-4ee4-a3f9-d1cd6ff70448] Running
	I0918 19:39:59.224894   15490 system_pods.go:89] "kube-apiserver-addons-656419" [2d5f6e1b-b2bb-4ebf-b32a-990aa2db6645] Running
	I0918 19:39:59.224899   15490 system_pods.go:89] "kube-controller-manager-addons-656419" [2cbd657a-4f1c-4cc2-99af-aad91d41a600] Running
	I0918 19:39:59.224928   15490 system_pods.go:89] "kube-ingress-dns-minikube" [e4914a00-328a-4acc-8954-d7a7130d5ce3] Running
	I0918 19:39:59.224938   15490 system_pods.go:89] "kube-proxy-qc45h" [f0937c78-8ea7-45f2-898d-43fb13cf4ad8] Running
	I0918 19:39:59.224951   15490 system_pods.go:89] "kube-scheduler-addons-656419" [3f19a718-6190-42e1-bced-8da05f9c3d0a] Running
	I0918 19:39:59.224958   15490 system_pods.go:89] "metrics-server-84c5f94fbc-9748w" [af8930c7-4faf-4d6d-b064-02a9a6ae3479] Running
	I0918 19:39:59.224967   15490 system_pods.go:89] "nvidia-device-plugin-daemonset-mpg2d" [6a45797b-d7b4-423b-93f3-b1a9a57eba5f] Running
	I0918 19:39:59.224973   15490 system_pods.go:89] "registry-66c9cd494c-vvhf9" [456d61ad-102d-4e2c-9b99-6bbce9fe2788] Running
	I0918 19:39:59.224981   15490 system_pods.go:89] "registry-proxy-vrcfx" [167c8d8a-ffcc-4e8d-be37-1288a0ac0e73] Running
	I0918 19:39:59.224991   15490 system_pods.go:89] "snapshot-controller-56fcc65765-clrrz" [bb8f5bc8-83cc-49e9-9cf1-4c45a423de87] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:39:59.225002   15490 system_pods.go:89] "snapshot-controller-56fcc65765-xctkv" [25335c1a-35f9-4d94-9cea-0afe13fa7191] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:39:59.225011   15490 system_pods.go:89] "storage-provisioner" [60dc227b-8c52-4c86-946d-00e7e31184d2] Running
	I0918 19:39:59.225021   15490 system_pods.go:89] "tiller-deploy-b48cc5f79-9vcgn" [3f9da820-37d4-44d6-a61e-f66ccc7d7588] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0918 19:39:59.225033   15490 system_pods.go:126] duration metric: took 10.401979ms to wait for k8s-apps to be running ...
	I0918 19:39:59.225048   15490 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 19:39:59.225099   15490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 19:39:59.244920   15490 system_svc.go:56] duration metric: took 19.850458ms WaitForService to wait for kubelet
	I0918 19:39:59.244968   15490 kubeadm.go:582] duration metric: took 54.057661504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 19:39:59.244991   15490 node_conditions.go:102] verifying NodePressure condition ...
	I0918 19:39:59.248272   15490 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 19:39:59.248300   15490 node_conditions.go:123] node cpu capacity is 2
	I0918 19:39:59.248316   15490 node_conditions.go:105] duration metric: took 3.318244ms to run NodePressure ...
	I0918 19:39:59.248329   15490 start.go:241] waiting for startup goroutines ...
	I0918 19:39:59.327149   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:59.649561   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:59.826340   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:00.149578   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:00.324931   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:00.827086   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:00.830018   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:01.150035   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:01.325159   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:01.649973   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:01.837590   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:02.151257   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:02.325859   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:02.650298   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:02.825943   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:03.149707   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:03.324486   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:03.778059   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:03.880469   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:04.149868   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:04.325434   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:04.650160   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:04.831255   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:05.154491   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:05.327423   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:05.649461   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:05.829740   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:06.149393   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:06.324613   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:06.649449   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:06.835314   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:07.150355   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:07.325389   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:07.650286   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:07.824838   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:08.157874   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:08.324764   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:08.649185   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:08.825748   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:09.150359   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:09.325261   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:09.650484   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:10.266535   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:10.266688   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:10.401969   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:10.649966   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:10.826242   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:11.150183   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:11.325061   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:11.649491   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:11.825979   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:12.152606   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:12.355958   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:12.651194   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:12.826728   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:13.197953   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:13.325380   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:13.649474   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:13.825611   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:14.150075   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:14.331070   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:14.649324   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:14.825094   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:15.150706   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:15.335122   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:15.659976   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:15.906481   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:16.149750   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:16.327509   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:16.649027   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:16.825627   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:17.151248   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:17.325514   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:17.650381   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:17.825324   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:18.150092   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:18.325359   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:18.649682   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:18.829507   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:19.191744   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:19.325636   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:19.649196   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:19.826292   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:20.150261   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:20.324730   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:20.649835   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:20.825792   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:21.152234   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:21.324574   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:21.648716   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:21.825293   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:22.150179   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:22.324938   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:22.650290   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:22.825169   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:23.150407   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:23.327134   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:23.651432   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:23.899798   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:24.152797   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:24.327908   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:24.650492   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:24.825543   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:25.150396   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:25.326022   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:25.648852   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:25.826110   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:26.149645   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:26.325921   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:26.652329   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:26.826621   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:27.150254   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:27.328578   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:27.650094   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:27.826813   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:28.152180   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:28.325594   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:28.649509   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:28.825634   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:29.150389   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:29.325069   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:29.652351   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:29.890573   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:30.149374   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:30.325279   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:30.649296   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:30.824354   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:31.150498   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:31.333067   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:31.649557   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:31.837728   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:32.150433   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:32.329602   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:32.662487   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:32.826321   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:33.150873   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:33.325607   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:33.658907   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:33.830965   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:34.148653   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:34.326182   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:34.789559   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:34.889437   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:35.149562   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:35.325265   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:35.649315   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:35.826790   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:36.148988   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:36.325960   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:36.649365   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:36.824320   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:37.149765   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:37.326481   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:37.649235   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:37.825034   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:38.149237   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:38.324511   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:38.649652   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:38.825251   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:39.149952   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:39.325640   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:39.663322   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:40.049789   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:40.151681   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:40.329011   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:40.666191   15490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:40.828840   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:41.152825   15490 kapi.go:107] duration metric: took 1m24.007962483s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0918 19:40:41.333413   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:41.834077   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:42.325485   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:42.825540   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:43.326362   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:43.834450   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:44.324720   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:44.824970   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:45.326077   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:45.824806   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:46.383929   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:46.825873   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:47.324977   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:47.826356   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:48.326085   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:48.824857   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:49.324205   15490 kapi.go:107] duration metric: took 1m27.003769171s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0918 19:42:08.822813   15490 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0918 19:42:08.822837   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:09.319202   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:09.817853   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:10.317456   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:10.817676   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:11.317656   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:11.817186   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:12.319189   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:12.817632   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:13.317347   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:13.816555   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:14.319339   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:14.822052   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:15.318305   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:15.817674   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:16.319426   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:16.817693   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:17.317968   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:17.817097   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:18.317766   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:18.818153   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:19.316933   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:19.818027   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:20.317159   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:20.818249   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:21.317031   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:21.817931   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:22.318096   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:22.817854   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:23.318040   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:23.818510   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:24.318371   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:24.820198   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:25.319948   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:25.817613   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:26.317803   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:26.820570   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:27.317179   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:27.817440   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:28.317572   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:28.817887   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:29.317585   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:29.817661   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:30.318745   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:30.817316   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:31.317056   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:31.817697   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:32.317786   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:32.817523   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:33.316874   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:33.818282   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:34.318159   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:34.818134   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:35.317645   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:35.817547   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:36.318113   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:36.817407   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:37.317064   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:37.817060   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:38.317698   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:38.817863   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:39.318452   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:39.817741   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:40.317395   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:40.818472   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:41.317931   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:41.817765   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:42.318386   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:42.817557   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:43.316546   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:43.820886   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:44.318198   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:44.816630   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:45.317063   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:45.819251   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:46.317473   15490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:42:46.818955   15490 kapi.go:107] duration metric: took 3m23.005696425s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0918 19:42:46.820619   15490 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-656419 cluster.
	I0918 19:42:46.821950   15490 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0918 19:42:46.823222   15490 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0918 19:42:46.824755   15490 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, cloud-spanner, inspektor-gadget, helm-tiller, ingress-dns, metrics-server, volcano, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0918 19:42:46.826294   15490 addons.go:510] duration metric: took 3m41.63897467s for enable addons: enabled=[nvidia-device-plugin storage-provisioner cloud-spanner inspektor-gadget helm-tiller ingress-dns metrics-server volcano yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0918 19:42:46.826349   15490 start.go:246] waiting for cluster config update ...
	I0918 19:42:46.826372   15490 start.go:255] writing updated cluster config ...
	I0918 19:42:46.826632   15490 ssh_runner.go:195] Run: rm -f paused
	I0918 19:42:46.903944   15490 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 19:42:46.905881   15490 out.go:177] * Done! kubectl is now configured to use "addons-656419" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 18 19:52:44 addons-656419 dockerd[1202]: time="2024-09-18T19:52:44.638412445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 18 19:52:44 addons-656419 dockerd[1202]: time="2024-09-18T19:52:44.638490919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 18 19:52:44 addons-656419 dockerd[1202]: time="2024-09-18T19:52:44.638504944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:52:44 addons-656419 dockerd[1202]: time="2024-09-18T19:52:44.638832054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:52:46 addons-656419 dockerd[1202]: time="2024-09-18T19:52:46.855253718Z" level=info msg="shim disconnected" id=8254928ab5b8c487223f2b7e2168e7068253f1f848228eef8cc5b11470a7bf0a namespace=moby
	Sep 18 19:52:46 addons-656419 dockerd[1195]: time="2024-09-18T19:52:46.855961082Z" level=info msg="ignoring event" container=8254928ab5b8c487223f2b7e2168e7068253f1f848228eef8cc5b11470a7bf0a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:52:46 addons-656419 dockerd[1202]: time="2024-09-18T19:52:46.856679028Z" level=warning msg="cleaning up after shim disconnected" id=8254928ab5b8c487223f2b7e2168e7068253f1f848228eef8cc5b11470a7bf0a namespace=moby
	Sep 18 19:52:46 addons-656419 dockerd[1202]: time="2024-09-18T19:52:46.857411965Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:52:47 addons-656419 dockerd[1195]: time="2024-09-18T19:52:47.338184788Z" level=info msg="ignoring event" container=bd36d209c3196e9666c452eca5c3246581ecfb257043cc7354f2d930c3e92edd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:52:47 addons-656419 dockerd[1202]: time="2024-09-18T19:52:47.353260055Z" level=info msg="shim disconnected" id=bd36d209c3196e9666c452eca5c3246581ecfb257043cc7354f2d930c3e92edd namespace=moby
	Sep 18 19:52:47 addons-656419 dockerd[1202]: time="2024-09-18T19:52:47.353342402Z" level=warning msg="cleaning up after shim disconnected" id=bd36d209c3196e9666c452eca5c3246581ecfb257043cc7354f2d930c3e92edd namespace=moby
	Sep 18 19:52:47 addons-656419 dockerd[1202]: time="2024-09-18T19:52:47.353353566Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:52:47 addons-656419 dockerd[1195]: time="2024-09-18T19:52:47.427987337Z" level=info msg="ignoring event" container=70526f8891c57af0c1a2e4f0244d2b65b2261e401a2e3d53979019f6f5c3cd12 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:52:47 addons-656419 dockerd[1202]: time="2024-09-18T19:52:47.432150181Z" level=info msg="shim disconnected" id=70526f8891c57af0c1a2e4f0244d2b65b2261e401a2e3d53979019f6f5c3cd12 namespace=moby
	Sep 18 19:52:47 addons-656419 dockerd[1202]: time="2024-09-18T19:52:47.432227594Z" level=warning msg="cleaning up after shim disconnected" id=70526f8891c57af0c1a2e4f0244d2b65b2261e401a2e3d53979019f6f5c3cd12 namespace=moby
	Sep 18 19:52:47 addons-656419 dockerd[1202]: time="2024-09-18T19:52:47.432239874Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:52:47 addons-656419 dockerd[1195]: time="2024-09-18T19:52:47.551974268Z" level=info msg="ignoring event" container=099b572e4c8ebb40ce9b730770b169a7d90b1578c45e6480bd38d854c03c8d06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:52:47 addons-656419 dockerd[1202]: time="2024-09-18T19:52:47.555674553Z" level=info msg="shim disconnected" id=099b572e4c8ebb40ce9b730770b169a7d90b1578c45e6480bd38d854c03c8d06 namespace=moby
	Sep 18 19:52:47 addons-656419 dockerd[1202]: time="2024-09-18T19:52:47.556050201Z" level=warning msg="cleaning up after shim disconnected" id=099b572e4c8ebb40ce9b730770b169a7d90b1578c45e6480bd38d854c03c8d06 namespace=moby
	Sep 18 19:52:47 addons-656419 dockerd[1202]: time="2024-09-18T19:52:47.556230014Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:52:47 addons-656419 dockerd[1202]: time="2024-09-18T19:52:47.574552339Z" level=warning msg="cleanup warnings time=\"2024-09-18T19:52:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 18 19:52:47 addons-656419 dockerd[1195]: time="2024-09-18T19:52:47.712389205Z" level=info msg="ignoring event" container=00a9cd515d41a7b97aca5bf177579811c799adb6d3905b1f4e875fcbe54b6793 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:52:47 addons-656419 dockerd[1202]: time="2024-09-18T19:52:47.712979583Z" level=info msg="shim disconnected" id=00a9cd515d41a7b97aca5bf177579811c799adb6d3905b1f4e875fcbe54b6793 namespace=moby
	Sep 18 19:52:47 addons-656419 dockerd[1202]: time="2024-09-18T19:52:47.713068475Z" level=warning msg="cleaning up after shim disconnected" id=00a9cd515d41a7b97aca5bf177579811c799adb6d3905b1f4e875fcbe54b6793 namespace=moby
	Sep 18 19:52:47 addons-656419 dockerd[1202]: time="2024-09-18T19:52:47.713078512Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	6a863e794f3b8       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                                                                4 seconds ago       Running             task-pv-container                        0                   a4cc54ee896b8       task-pv-pod-restore
	5966d0f731e94       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                                  20 seconds ago      Running             hello-world-app                          0                   1e12a9a181496       hello-world-app-55bf9c44b4-rvzxq
	5065b44328ae4       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                                30 seconds ago      Running             nginx                                    0                   c43cc052acbc0       nginx
	c9282deba0afa       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 10 minutes ago      Running             gcp-auth                                 0                   34ea121a71337       gcp-auth-89d5ffd79-txph4
	f0d80f6ca4645       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          12 minutes ago      Running             csi-snapshotter                          0                   58b1f47aca11b       csi-hostpathplugin-c8vf5
	e6f63931fc453       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          12 minutes ago      Running             csi-provisioner                          0                   58b1f47aca11b       csi-hostpathplugin-c8vf5
	b08cd1833333f       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            12 minutes ago      Running             liveness-probe                           0                   58b1f47aca11b       csi-hostpathplugin-c8vf5
	74872e5a28067       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           12 minutes ago      Running             hostpath                                 0                   58b1f47aca11b       csi-hostpathplugin-c8vf5
	508cd04410c3c       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                12 minutes ago      Running             node-driver-registrar                    0                   58b1f47aca11b       csi-hostpathplugin-c8vf5
	2f0c0e2cf04b7       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              12 minutes ago      Running             csi-resizer                              0                   377c723d36f6f       csi-hostpath-resizer-0
	e67971e282bea       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             12 minutes ago      Running             csi-attacher                             0                   7b4b6ccbf9c9f       csi-hostpath-attacher-0
	bc5311de41d97       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   12 minutes ago      Running             csi-external-health-monitor-controller   0                   58b1f47aca11b       csi-hostpathplugin-c8vf5
	62eb4e374ad6b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   12 minutes ago      Exited              patch                                    0                   f6a93cdea8558       ingress-nginx-admission-patch-mvrpr
	bdcdeef43e3c9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   12 minutes ago      Exited              create                                   0                   ec8fe8344e952       ingress-nginx-admission-create-5wgqz
	e77f2b606c1b7       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago      Running             volume-snapshot-controller               0                   1f6605bf60e21       snapshot-controller-56fcc65765-xctkv
	07ea56ebe293b       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago      Running             volume-snapshot-controller               0                   b79c88f953f41       snapshot-controller-56fcc65765-clrrz
	5bdb1d25ab864       6e38f40d628db                                                                                                                                13 minutes ago      Running             storage-provisioner                      0                   23063a286a487       storage-provisioner
	7075b6c748dc4       c69fa2e9cbf5f                                                                                                                                13 minutes ago      Running             coredns                                  0                   6bfa8ffd7346e       coredns-7c65d6cfc9-bjkr7
	d97782ecb4605       60c005f310ff3                                                                                                                                13 minutes ago      Running             kube-proxy                               0                   a8fb17ba5656a       kube-proxy-qc45h
	a507ae3b3d13d       6bab7719df100                                                                                                                                13 minutes ago      Running             kube-apiserver                           0                   3e26e59c3211c       kube-apiserver-addons-656419
	2482781df2280       175ffd71cce3d                                                                                                                                13 minutes ago      Running             kube-controller-manager                  0                   0041a817874fd       kube-controller-manager-addons-656419
	1b840d819a39e       9aa1fad941575                                                                                                                                13 minutes ago      Running             kube-scheduler                           0                   1f1c3f5317197       kube-scheduler-addons-656419
	bfb379e4d4f0e       2e96e5913fc06                                                                                                                                13 minutes ago      Running             etcd                                     0                   de909d51970d0       etcd-addons-656419
	
	
	==> coredns [7075b6c748dc] <==
	[INFO] 127.0.0.1:36511 - 50047 "HINFO IN 3361721310504964671.7117542919912984241. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.063386259s
	[INFO] 10.244.0.7:53039 - 13737 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000990568s
	[INFO] 10.244.0.7:53039 - 51108 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000089728s
	[INFO] 10.244.0.7:44376 - 32008 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094033s
	[INFO] 10.244.0.7:44376 - 19213 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000170503s
	[INFO] 10.244.0.7:50295 - 5430 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000215489s
	[INFO] 10.244.0.7:50295 - 11056 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000141654s
	[INFO] 10.244.0.7:57346 - 12140 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000300246s
	[INFO] 10.244.0.7:57346 - 24686 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000212747s
	[INFO] 10.244.0.7:47123 - 54052 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000253666s
	[INFO] 10.244.0.7:47123 - 28454 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000107824s
	[INFO] 10.244.0.7:35748 - 42959 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000198481s
	[INFO] 10.244.0.7:35748 - 4812 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000249549s
	[INFO] 10.244.0.7:37457 - 30238 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000146573s
	[INFO] 10.244.0.7:37457 - 2320 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000232216s
	[INFO] 10.244.0.7:53716 - 25896 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000068459s
	[INFO] 10.244.0.7:53716 - 44074 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000218932s
	[INFO] 10.244.0.26:45297 - 4566 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001610161s
	[INFO] 10.244.0.26:48655 - 10832 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000149699s
	[INFO] 10.244.0.26:34810 - 3515 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000142009s
	[INFO] 10.244.0.26:54302 - 56912 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000727148s
	[INFO] 10.244.0.26:48665 - 14798 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121673s
	[INFO] 10.244.0.26:44065 - 40881 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.001687372s
	[INFO] 10.244.0.26:36540 - 2448 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.003271132s
	[INFO] 10.244.0.26:60165 - 31676 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003588942s
	
	
	==> describe nodes <==
	Name:               addons-656419
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-656419
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=addons-656419
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T19_39_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-656419
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-656419"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 19:38:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-656419
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 19:52:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 19:52:36 +0000   Wed, 18 Sep 2024 19:38:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 19:52:36 +0000   Wed, 18 Sep 2024 19:38:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 19:52:36 +0000   Wed, 18 Sep 2024 19:38:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 19:52:36 +0000   Wed, 18 Sep 2024 19:39:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.154
	  Hostname:    addons-656419
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 35f14c3309d9430491c4710f1c251923
	  System UUID:                35f14c33-09d9-4304-91c4-710f1c251923
	  Boot ID:                    e4b66f89-ba29-4e19-b361-747d31f30da5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     hello-world-app-55bf9c44b4-rvzxq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     task-pv-pod-restore                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  gcp-auth                    gcp-auth-89d5ffd79-txph4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-bjkr7                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpathplugin-c8vf5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-addons-656419                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-656419             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-656419    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-qc45h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-656419             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-56fcc65765-clrrz     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-56fcc65765-xctkv     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-656419 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node addons-656419 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-656419 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node addons-656419 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node addons-656419 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node addons-656419 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m                kubelet          Node addons-656419 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node addons-656419 event: Registered Node addons-656419 in Controller
	
	
	==> dmesg <==
	[  +5.615683] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.664492] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.330737] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.999166] kauditd_printk_skb: 4 callbacks suppressed
	[Sep18 19:41] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.797006] kauditd_printk_skb: 28 callbacks suppressed
	[Sep18 19:42] kauditd_printk_skb: 9 callbacks suppressed
	[ +13.766841] kauditd_printk_skb: 28 callbacks suppressed
	[ +13.714470] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.603809] kauditd_printk_skb: 7 callbacks suppressed
	[Sep18 19:43] kauditd_printk_skb: 2 callbacks suppressed
	[ +17.847904] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.976957] kauditd_printk_skb: 21 callbacks suppressed
	[ +13.177340] kauditd_printk_skb: 28 callbacks suppressed
	[Sep18 19:51] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.154392] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.225489] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.296816] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.682695] kauditd_printk_skb: 33 callbacks suppressed
	[Sep18 19:52] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.187004] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.655571] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.934675] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.511136] kauditd_printk_skb: 27 callbacks suppressed
	[ +12.020476] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [bfb379e4d4f0] <==
	{"level":"info","ts":"2024-09-18T19:40:34.759168Z","caller":"traceutil/trace.go:171","msg":"trace[390487392] linearizableReadLoop","detail":"{readStateIndex:1298; appliedIndex:1297; }","duration":"137.837129ms","start":"2024-09-18T19:40:34.621302Z","end":"2024-09-18T19:40:34.759139Z","steps":["trace[390487392] 'read index received'  (duration: 137.698294ms)","trace[390487392] 'applied index is now lower than readState.Index'  (duration: 138.366µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-18T19:40:34.759305Z","caller":"traceutil/trace.go:171","msg":"trace[438930098] transaction","detail":"{read_only:false; response_revision:1263; number_of_response:1; }","duration":"297.373336ms","start":"2024-09-18T19:40:34.461926Z","end":"2024-09-18T19:40:34.759299Z","steps":["trace[438930098] 'process raft request'  (duration: 297.091382ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:40:34.759459Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.104212ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T19:40:34.759483Z","caller":"traceutil/trace.go:171","msg":"trace[1611665561] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1263; }","duration":"138.178994ms","start":"2024-09-18T19:40:34.621298Z","end":"2024-09-18T19:40:34.759477Z","steps":["trace[1611665561] 'agreement among raft nodes before linearized reading'  (duration: 138.087955ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:40:40.010215Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"244.495421ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-09-18T19:40:40.010296Z","caller":"traceutil/trace.go:171","msg":"trace[1628555448] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1273; }","duration":"244.58647ms","start":"2024-09-18T19:40:39.765698Z","end":"2024-09-18T19:40:40.010285Z","steps":["trace[1628555448] 'range keys from in-memory index tree'  (duration: 244.352426ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:40:40.010680Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"325.063786ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/gadget-hwhdt\" ","response":"range_response_count:1 size:12479"}
	{"level":"info","ts":"2024-09-18T19:40:40.010720Z","caller":"traceutil/trace.go:171","msg":"trace[1880782352] range","detail":"{range_begin:/registry/pods/gadget/gadget-hwhdt; range_end:; response_count:1; response_revision:1273; }","duration":"325.107191ms","start":"2024-09-18T19:40:39.685605Z","end":"2024-09-18T19:40:40.010712Z","steps":["trace[1880782352] 'range keys from in-memory index tree'  (duration: 324.967371ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:40:40.010756Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-18T19:40:39.685564Z","time spent":"325.184976ms","remote":"127.0.0.1:44308","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":1,"response size":12502,"request content":"key:\"/registry/pods/gadget/gadget-hwhdt\" "}
	{"level":"warn","ts":"2024-09-18T19:40:40.010988Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"324.025631ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gadget/gadget-hwhdt.17f66dfbd042477c\" ","response":"range_response_count:1 size:779"}
	{"level":"info","ts":"2024-09-18T19:40:40.011022Z","caller":"traceutil/trace.go:171","msg":"trace[176551861] range","detail":"{range_begin:/registry/events/gadget/gadget-hwhdt.17f66dfbd042477c; range_end:; response_count:1; response_revision:1273; }","duration":"324.061957ms","start":"2024-09-18T19:40:39.686954Z","end":"2024-09-18T19:40:40.011016Z","steps":["trace[176551861] 'range keys from in-memory index tree'  (duration: 323.912287ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:40:40.011056Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-18T19:40:39.686922Z","time spent":"324.128517ms","remote":"127.0.0.1:48892","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":802,"request content":"key:\"/registry/events/gadget/gadget-hwhdt.17f66dfbd042477c\" "}
	{"level":"warn","ts":"2024-09-18T19:40:40.011400Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.232013ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T19:40:40.011436Z","caller":"traceutil/trace.go:171","msg":"trace[696854276] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1273; }","duration":"222.270741ms","start":"2024-09-18T19:40:39.789159Z","end":"2024-09-18T19:40:40.011429Z","steps":["trace[696854276] 'range keys from in-memory index tree'  (duration: 222.162723ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:40:40.011505Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.998935ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T19:40:40.011518Z","caller":"traceutil/trace.go:171","msg":"trace[1871984817] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1273; }","duration":"215.013842ms","start":"2024-09-18T19:40:39.796500Z","end":"2024-09-18T19:40:40.011514Z","steps":["trace[1871984817] 'range keys from in-memory index tree'  (duration: 214.929933ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T19:48:55.985096Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1909}
	{"level":"info","ts":"2024-09-18T19:48:56.119292Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1909,"took":"132.054171ms","hash":1982860295,"current-db-size-bytes":9236480,"current-db-size":"9.2 MB","current-db-size-in-use-bytes":4972544,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-18T19:48:56.119748Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1982860295,"revision":1909,"compact-revision":-1}
	{"level":"info","ts":"2024-09-18T19:51:40.220027Z","caller":"traceutil/trace.go:171","msg":"trace[856656014] linearizableReadLoop","detail":"{readStateIndex:2789; appliedIndex:2788; }","duration":"170.986646ms","start":"2024-09-18T19:51:40.048999Z","end":"2024-09-18T19:51:40.219986Z","steps":["trace[856656014] 'read index received'  (duration: 170.768899ms)","trace[856656014] 'applied index is now lower than readState.Index'  (duration: 217.27µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-18T19:51:40.220176Z","caller":"traceutil/trace.go:171","msg":"trace[641011916] transaction","detail":"{read_only:false; response_revision:2607; number_of_response:1; }","duration":"253.784147ms","start":"2024-09-18T19:51:39.966371Z","end":"2024-09-18T19:51:40.220155Z","steps":["trace[641011916] 'process raft request'  (duration: 253.437601ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:51:40.220299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.25098ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T19:51:40.220334Z","caller":"traceutil/trace.go:171","msg":"trace[2144503042] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2607; }","duration":"171.346054ms","start":"2024-09-18T19:51:40.048979Z","end":"2024-09-18T19:51:40.220325Z","steps":["trace[2144503042] 'agreement among raft nodes before linearized reading'  (duration: 171.190574ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:51:40.220500Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.936352ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-09-18T19:51:40.220525Z","caller":"traceutil/trace.go:171","msg":"trace[1043382379] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:2607; }","duration":"130.966454ms","start":"2024-09-18T19:51:40.089551Z","end":"2024-09-18T19:51:40.220517Z","steps":["trace[1043382379] 'agreement among raft nodes before linearized reading'  (duration: 130.869461ms)"],"step_count":1}
	
	
	==> gcp-auth [c9282deba0af] <==
	2024/09/18 19:43:32 Ready to write response ...
	2024/09/18 19:51:35 Ready to marshal response ...
	2024/09/18 19:51:35 Ready to write response ...
	2024/09/18 19:51:35 Ready to marshal response ...
	2024/09/18 19:51:35 Ready to write response ...
	2024/09/18 19:51:35 Ready to marshal response ...
	2024/09/18 19:51:35 Ready to write response ...
	2024/09/18 19:51:39 Ready to marshal response ...
	2024/09/18 19:51:39 Ready to write response ...
	2024/09/18 19:51:41 Ready to marshal response ...
	2024/09/18 19:51:41 Ready to write response ...
	2024/09/18 19:51:41 Ready to marshal response ...
	2024/09/18 19:51:41 Ready to write response ...
	2024/09/18 19:51:46 Ready to marshal response ...
	2024/09/18 19:51:46 Ready to write response ...
	2024/09/18 19:51:54 Ready to marshal response ...
	2024/09/18 19:51:54 Ready to write response ...
	2024/09/18 19:52:14 Ready to marshal response ...
	2024/09/18 19:52:14 Ready to write response ...
	2024/09/18 19:52:19 Ready to marshal response ...
	2024/09/18 19:52:19 Ready to write response ...
	2024/09/18 19:52:25 Ready to marshal response ...
	2024/09/18 19:52:25 Ready to write response ...
	2024/09/18 19:52:43 Ready to marshal response ...
	2024/09/18 19:52:43 Ready to write response ...
	
	
	==> kernel <==
	 19:52:48 up 14 min,  0 users,  load average: 0.55, 0.46, 0.47
	Linux addons-656419 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a507ae3b3d13] <==
	I0918 19:43:22.359141       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0918 19:43:22.395407       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	E0918 19:43:22.660948       1 watch.go:250] "Unhandled Error" err="write tcp 192.168.39.154:8443->10.244.0.16:50680: write: connection reset by peer" logger="UnhandledError"
	I0918 19:43:22.955402       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0918 19:43:23.117630       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0918 19:43:23.201364       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0918 19:43:23.302820       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	W0918 19:43:23.493682       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	I0918 19:43:23.571185       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0918 19:43:23.612069       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0918 19:43:23.915429       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0918 19:43:24.270616       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0918 19:43:24.303639       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0918 19:43:24.397958       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0918 19:43:24.455632       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0918 19:43:24.948114       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0918 19:43:25.058557       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0918 19:51:35.451034       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.225.87"}
	I0918 19:51:57.976807       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0918 19:51:59.016476       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0918 19:52:10.080567       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0918 19:52:14.267996       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0918 19:52:14.529491       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.160.82"}
	I0918 19:52:26.104935       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.67.191"}
	I0918 19:52:27.822618       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [2482781df228] <==
	I0918 19:52:18.870553       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0918 19:52:19.562265       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:52:19.562633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:52:25.791922       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:52:25.791966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0918 19:52:25.937715       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="56.028165ms"
	I0918 19:52:25.966066       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="28.223047ms"
	I0918 19:52:25.967950       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="1.823943ms"
	I0918 19:52:28.551688       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0918 19:52:28.557633       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="4.033µs"
	I0918 19:52:28.565154       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0918 19:52:28.757343       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.319515ms"
	I0918 19:52:28.757755       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="108.314µs"
	W0918 19:52:30.240576       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:52:30.240632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:52:30.368612       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:52:30.368834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:52:30.728117       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:52:30.728419       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0918 19:52:36.153115       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-656419"
	I0918 19:52:38.482306       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0918 19:52:39.991834       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:52:39.991943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0918 19:52:42.726740       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I0918 19:52:47.270688       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="4.78µs"
	
	
	==> kube-proxy [d97782ecb460] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 19:39:06.501159       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 19:39:06.584531       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.154"]
	E0918 19:39:06.584607       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 19:39:06.778709       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 19:39:06.778743       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 19:39:06.778765       1 server_linux.go:169] "Using iptables Proxier"
	I0918 19:39:06.782129       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 19:39:06.782461       1 server.go:483] "Version info" version="v1.31.1"
	I0918 19:39:06.782473       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 19:39:06.783857       1 config.go:199] "Starting service config controller"
	I0918 19:39:06.784048       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 19:39:06.784086       1 config.go:105] "Starting endpoint slice config controller"
	I0918 19:39:06.784090       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 19:39:06.784845       1 config.go:328] "Starting node config controller"
	I0918 19:39:06.784852       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 19:39:06.885130       1 shared_informer.go:320] Caches are synced for service config
	I0918 19:39:06.885223       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 19:39:06.885058       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1b840d819a39] <==
	E0918 19:38:57.559652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:57.552502       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 19:38:57.559938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:57.552537       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 19:38:57.560132       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0918 19:38:57.556706       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:58.390547       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 19:38:58.390608       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0918 19:38:58.393017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 19:38:58.393166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:58.470548       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 19:38:58.470678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:58.482497       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 19:38:58.482634       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:58.489477       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 19:38:58.489610       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:58.609917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 19:38:58.610143       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:58.726847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 19:38:58.727066       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:58.809575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 19:38:58.809697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:58.816629       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0918 19:38:58.816842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0918 19:39:01.532613       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 18 19:52:43 addons-656419 kubelet[1977]: I0918 19:52:43.302222    1977 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e3d8f7a7-effe-4b43-ae47-9f1df569248c\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^91375157-75f7-11ef-a01f-52c11fd0e115\") pod \"task-pv-pod-restore\" (UID: \"a38a6200-d8dc-4baf-8a82-8c2f0e319417\") " pod="default/task-pv-pod-restore"
	Sep 18 19:52:43 addons-656419 kubelet[1977]: I0918 19:52:43.426731    1977 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e3d8f7a7-effe-4b43-ae47-9f1df569248c\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^91375157-75f7-11ef-a01f-52c11fd0e115\") pod \"task-pv-pod-restore\" (UID: \"a38a6200-d8dc-4baf-8a82-8c2f0e319417\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/ae27a975b97cc684ae63050c55efd17dd361e54cd14f1ebfd36ea73b898b7e2b/globalmount\"" pod="default/task-pv-pod-restore"
	Sep 18 19:52:44 addons-656419 kubelet[1977]: E0918 19:52:44.144519    1977 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="088d3788-15da-49bf-b606-3731a41576f8"
	Sep 18 19:52:46 addons-656419 kubelet[1977]: I0918 19:52:46.737114    1977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=3.013335541 podStartE2EDuration="3.737089939s" podCreationTimestamp="2024-09-18 19:52:43 +0000 UTC" firstStartedPulling="2024-09-18 19:52:43.833175337 +0000 UTC m=+823.804420971" lastFinishedPulling="2024-09-18 19:52:44.556929734 +0000 UTC m=+824.528175369" observedRunningTime="2024-09-18 19:52:45.104135734 +0000 UTC m=+825.075381392" watchObservedRunningTime="2024-09-18 19:52:46.737089939 +0000 UTC m=+826.708335593"
	Sep 18 19:52:46 addons-656419 kubelet[1977]: I0918 19:52:46.934971    1977 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl7wl\" (UniqueName: \"kubernetes.io/projected/933e2299-a2b8-41db-bba1-5c4466fb6872-kube-api-access-pl7wl\") pod \"933e2299-a2b8-41db-bba1-5c4466fb6872\" (UID: \"933e2299-a2b8-41db-bba1-5c4466fb6872\") "
	Sep 18 19:52:46 addons-656419 kubelet[1977]: I0918 19:52:46.935023    1977 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/933e2299-a2b8-41db-bba1-5c4466fb6872-gcp-creds\") pod \"933e2299-a2b8-41db-bba1-5c4466fb6872\" (UID: \"933e2299-a2b8-41db-bba1-5c4466fb6872\") "
	Sep 18 19:52:46 addons-656419 kubelet[1977]: I0918 19:52:46.935154    1977 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/933e2299-a2b8-41db-bba1-5c4466fb6872-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "933e2299-a2b8-41db-bba1-5c4466fb6872" (UID: "933e2299-a2b8-41db-bba1-5c4466fb6872"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 18 19:52:46 addons-656419 kubelet[1977]: I0918 19:52:46.939666    1977 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/933e2299-a2b8-41db-bba1-5c4466fb6872-kube-api-access-pl7wl" (OuterVolumeSpecName: "kube-api-access-pl7wl") pod "933e2299-a2b8-41db-bba1-5c4466fb6872" (UID: "933e2299-a2b8-41db-bba1-5c4466fb6872"). InnerVolumeSpecName "kube-api-access-pl7wl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:52:47 addons-656419 kubelet[1977]: I0918 19:52:47.035971    1977 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/933e2299-a2b8-41db-bba1-5c4466fb6872-gcp-creds\") on node \"addons-656419\" DevicePath \"\""
	Sep 18 19:52:47 addons-656419 kubelet[1977]: I0918 19:52:47.036005    1977 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pl7wl\" (UniqueName: \"kubernetes.io/projected/933e2299-a2b8-41db-bba1-5c4466fb6872-kube-api-access-pl7wl\") on node \"addons-656419\" DevicePath \"\""
	Sep 18 19:52:47 addons-656419 kubelet[1977]: I0918 19:52:47.744513    1977 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4648k\" (UniqueName: \"kubernetes.io/projected/456d61ad-102d-4e2c-9b99-6bbce9fe2788-kube-api-access-4648k\") pod \"456d61ad-102d-4e2c-9b99-6bbce9fe2788\" (UID: \"456d61ad-102d-4e2c-9b99-6bbce9fe2788\") "
	Sep 18 19:52:47 addons-656419 kubelet[1977]: I0918 19:52:47.749508    1977 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456d61ad-102d-4e2c-9b99-6bbce9fe2788-kube-api-access-4648k" (OuterVolumeSpecName: "kube-api-access-4648k") pod "456d61ad-102d-4e2c-9b99-6bbce9fe2788" (UID: "456d61ad-102d-4e2c-9b99-6bbce9fe2788"). InnerVolumeSpecName "kube-api-access-4648k". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:52:47 addons-656419 kubelet[1977]: I0918 19:52:47.846779    1977 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtv92\" (UniqueName: \"kubernetes.io/projected/167c8d8a-ffcc-4e8d-be37-1288a0ac0e73-kube-api-access-jtv92\") pod \"167c8d8a-ffcc-4e8d-be37-1288a0ac0e73\" (UID: \"167c8d8a-ffcc-4e8d-be37-1288a0ac0e73\") "
	Sep 18 19:52:47 addons-656419 kubelet[1977]: I0918 19:52:47.847090    1977 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4648k\" (UniqueName: \"kubernetes.io/projected/456d61ad-102d-4e2c-9b99-6bbce9fe2788-kube-api-access-4648k\") on node \"addons-656419\" DevicePath \"\""
	Sep 18 19:52:47 addons-656419 kubelet[1977]: I0918 19:52:47.849795    1977 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/167c8d8a-ffcc-4e8d-be37-1288a0ac0e73-kube-api-access-jtv92" (OuterVolumeSpecName: "kube-api-access-jtv92") pod "167c8d8a-ffcc-4e8d-be37-1288a0ac0e73" (UID: "167c8d8a-ffcc-4e8d-be37-1288a0ac0e73"). InnerVolumeSpecName "kube-api-access-jtv92". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:52:47 addons-656419 kubelet[1977]: I0918 19:52:47.947441    1977 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jtv92\" (UniqueName: \"kubernetes.io/projected/167c8d8a-ffcc-4e8d-be37-1288a0ac0e73-kube-api-access-jtv92\") on node \"addons-656419\" DevicePath \"\""
	Sep 18 19:52:48 addons-656419 kubelet[1977]: I0918 19:52:48.177612    1977 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="933e2299-a2b8-41db-bba1-5c4466fb6872" path="/var/lib/kubelet/pods/933e2299-a2b8-41db-bba1-5c4466fb6872/volumes"
	Sep 18 19:52:48 addons-656419 kubelet[1977]: I0918 19:52:48.179986    1977 scope.go:117] "RemoveContainer" containerID="70526f8891c57af0c1a2e4f0244d2b65b2261e401a2e3d53979019f6f5c3cd12"
	Sep 18 19:52:48 addons-656419 kubelet[1977]: I0918 19:52:48.227558    1977 scope.go:117] "RemoveContainer" containerID="70526f8891c57af0c1a2e4f0244d2b65b2261e401a2e3d53979019f6f5c3cd12"
	Sep 18 19:52:48 addons-656419 kubelet[1977]: E0918 19:52:48.228844    1977 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 70526f8891c57af0c1a2e4f0244d2b65b2261e401a2e3d53979019f6f5c3cd12" containerID="70526f8891c57af0c1a2e4f0244d2b65b2261e401a2e3d53979019f6f5c3cd12"
	Sep 18 19:52:48 addons-656419 kubelet[1977]: I0918 19:52:48.228921    1977 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"70526f8891c57af0c1a2e4f0244d2b65b2261e401a2e3d53979019f6f5c3cd12"} err="failed to get container status \"70526f8891c57af0c1a2e4f0244d2b65b2261e401a2e3d53979019f6f5c3cd12\": rpc error: code = Unknown desc = Error response from daemon: No such container: 70526f8891c57af0c1a2e4f0244d2b65b2261e401a2e3d53979019f6f5c3cd12"
	Sep 18 19:52:48 addons-656419 kubelet[1977]: I0918 19:52:48.228950    1977 scope.go:117] "RemoveContainer" containerID="bd36d209c3196e9666c452eca5c3246581ecfb257043cc7354f2d930c3e92edd"
	Sep 18 19:52:48 addons-656419 kubelet[1977]: I0918 19:52:48.282024    1977 scope.go:117] "RemoveContainer" containerID="bd36d209c3196e9666c452eca5c3246581ecfb257043cc7354f2d930c3e92edd"
	Sep 18 19:52:48 addons-656419 kubelet[1977]: E0918 19:52:48.283774    1977 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: bd36d209c3196e9666c452eca5c3246581ecfb257043cc7354f2d930c3e92edd" containerID="bd36d209c3196e9666c452eca5c3246581ecfb257043cc7354f2d930c3e92edd"
	Sep 18 19:52:48 addons-656419 kubelet[1977]: I0918 19:52:48.284296    1977 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"bd36d209c3196e9666c452eca5c3246581ecfb257043cc7354f2d930c3e92edd"} err="failed to get container status \"bd36d209c3196e9666c452eca5c3246581ecfb257043cc7354f2d930c3e92edd\": rpc error: code = Unknown desc = Error response from daemon: No such container: bd36d209c3196e9666c452eca5c3246581ecfb257043cc7354f2d930c3e92edd"
	
	
	==> storage-provisioner [5bdb1d25ab86] <==
	I0918 19:39:16.587705       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 19:39:16.630331       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 19:39:16.630400       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 19:39:16.715430       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 19:39:16.722333       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-656419_1d1ec36e-65db-48d3-81ec-22de03d530ec!
	I0918 19:39:16.749305       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f2230d6e-60f0-4aa4-af2d-d26d5ec03edc", APIVersion:"v1", ResourceVersion:"722", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-656419_1d1ec36e-65db-48d3-81ec-22de03d530ec became leader
	I0918 19:39:17.124102       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-656419_1d1ec36e-65db-48d3-81ec-22de03d530ec!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-656419 -n addons-656419
helpers_test.go:261: (dbg) Run:  kubectl --context addons-656419 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-656419 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-656419 describe pod busybox:
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-656419/192.168.39.154
	Start Time:       Wed, 18 Sep 2024 19:43:32 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwzrh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xwzrh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason          Age                    From               Message
	  ----     ------          ----                   ----               -------
	  Normal   Scheduled       9m17s                  default-scheduler  Successfully assigned default/busybox to addons-656419
	  Normal   SandboxChanged  9m16s                  kubelet            Pod sandbox changed, it will be killed and re-created.
	  Warning  Failed          7m59s (x6 over 9m15s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling         7m46s (x4 over 9m17s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed          7m46s (x4 over 9m17s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed          7m46s (x4 over 9m17s)  kubelet            Error: ErrImagePull
	  Normal   BackOff         4m7s (x21 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (74.90s)
Test pass (309/341)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.56
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 3.81
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.14
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.6
22 TestOffline 101.86
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 279.13
29 TestAddons/serial/Volcano 44.98
31 TestAddons/serial/GCPAuth/Namespaces 0.12
34 TestAddons/parallel/Ingress 21.55
35 TestAddons/parallel/InspektorGadget 10.81
36 TestAddons/parallel/MetricsServer 6.81
37 TestAddons/parallel/HelmTiller 11.41
39 TestAddons/parallel/CSI 57.77
40 TestAddons/parallel/Headlamp 20.06
41 TestAddons/parallel/CloudSpanner 6.49
42 TestAddons/parallel/LocalPath 56.54
43 TestAddons/parallel/NvidiaDevicePlugin 6.66
44 TestAddons/parallel/Yakd 10.69
45 TestAddons/StoppedEnableDisable 8.6
46 TestCertOptions 63.24
47 TestCertExpiration 335.13
48 TestDockerFlags 60.93
49 TestForceSystemdFlag 85.45
50 TestForceSystemdEnv 88.08
52 TestKVMDriverInstallOrUpdate 5.5
56 TestErrorSpam/setup 49.23
57 TestErrorSpam/start 0.35
58 TestErrorSpam/status 0.75
59 TestErrorSpam/pause 1.26
60 TestErrorSpam/unpause 1.47
61 TestErrorSpam/stop 16.29
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 91.31
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 42.94
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.05
73 TestFunctional/serial/CacheCmd/cache/add_local 1.34
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.19
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
81 TestFunctional/serial/ExtraConfig 41.74
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 1.16
84 TestFunctional/serial/LogsFileCmd 1.04
85 TestFunctional/serial/InvalidService 4.85
87 TestFunctional/parallel/ConfigCmd 0.34
88 TestFunctional/parallel/DashboardCmd 12.71
89 TestFunctional/parallel/DryRun 0.29
90 TestFunctional/parallel/InternationalLanguage 0.15
91 TestFunctional/parallel/StatusCmd 0.8
95 TestFunctional/parallel/ServiceCmdConnect 25.61
96 TestFunctional/parallel/AddonsCmd 0.13
97 TestFunctional/parallel/PersistentVolumeClaim 50.9
99 TestFunctional/parallel/SSHCmd 0.44
100 TestFunctional/parallel/CpCmd 1.33
101 TestFunctional/parallel/MySQL 32.25
102 TestFunctional/parallel/FileSync 0.22
103 TestFunctional/parallel/CertSync 1.34
107 TestFunctional/parallel/NodeLabels 0.08
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.23
111 TestFunctional/parallel/License 0.21
112 TestFunctional/parallel/Version/short 0.05
113 TestFunctional/parallel/Version/components 0.81
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
127 TestFunctional/parallel/ImageCommands/ImageBuild 3.84
128 TestFunctional/parallel/ImageCommands/Setup 1.55
129 TestFunctional/parallel/DockerEnv/bash 0.88
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.18
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
135 TestFunctional/parallel/ProfileCmd/profile_list 0.38
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.62
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.43
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.82
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
143 TestFunctional/parallel/ServiceCmd/DeployApp 21.32
144 TestFunctional/parallel/ServiceCmd/List 0.49
145 TestFunctional/parallel/MountCmd/any-port 7.83
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.66
148 TestFunctional/parallel/ServiceCmd/Format 0.36
149 TestFunctional/parallel/ServiceCmd/URL 0.32
150 TestFunctional/parallel/MountCmd/specific-port 2
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.33
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
155 TestGvisorAddon 324.84
158 TestMultiControlPlane/serial/StartCluster 229.31
159 TestMultiControlPlane/serial/DeployApp 5.91
160 TestMultiControlPlane/serial/PingHostFromPods 1.35
161 TestMultiControlPlane/serial/AddWorkerNode 72.11
162 TestMultiControlPlane/serial/NodeLabels 0.09
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.95
164 TestMultiControlPlane/serial/CopyFile 14.23
165 TestMultiControlPlane/serial/StopSecondaryNode 14.01
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
167 TestMultiControlPlane/serial/RestartSecondaryNode 49.42
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.91
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 202.21
170 TestMultiControlPlane/serial/DeleteSecondaryNode 8.04
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
172 TestMultiControlPlane/serial/StopCluster 38.36
173 TestMultiControlPlane/serial/RestartCluster 119.86
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
175 TestMultiControlPlane/serial/AddSecondaryNode 84.2
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
179 TestImageBuild/serial/Setup 53.29
180 TestImageBuild/serial/NormalBuild 2.45
181 TestImageBuild/serial/BuildWithBuildArg 1.46
182 TestImageBuild/serial/BuildWithDockerIgnore 1
183 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.91
187 TestJSONOutput/start/Command 92.53
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.62
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.55
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 12.66
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.21
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 112.85
219 TestMountStart/serial/StartWithMountFirst 32.16
220 TestMountStart/serial/VerifyMountFirst 0.62
221 TestMountStart/serial/StartWithMountSecond 32.84
222 TestMountStart/serial/VerifyMountSecond 0.38
223 TestMountStart/serial/DeleteFirst 0.89
224 TestMountStart/serial/VerifyMountPostDelete 0.38
225 TestMountStart/serial/Stop 2.28
226 TestMountStart/serial/RestartStopped 25.7
227 TestMountStart/serial/VerifyMountPostStop 0.39
230 TestMultiNode/serial/FreshStart2Nodes 137.73
231 TestMultiNode/serial/DeployApp2Nodes 4.18
232 TestMultiNode/serial/PingHostFrom2Pods 0.84
233 TestMultiNode/serial/AddNode 58.68
234 TestMultiNode/serial/MultiNodeLabels 0.07
235 TestMultiNode/serial/ProfileList 0.6
236 TestMultiNode/serial/CopyFile 7.7
237 TestMultiNode/serial/StopNode 3.42
238 TestMultiNode/serial/StartAfterStop 43.51
239 TestMultiNode/serial/RestartKeepsNodes 192.27
240 TestMultiNode/serial/DeleteNode 2.43
241 TestMultiNode/serial/StopMultiNode 25.1
242 TestMultiNode/serial/RestartMultiNode 151.04
243 TestMultiNode/serial/ValidateNameConflict 55.03
248 TestPreload 153.14
250 TestScheduledStopUnix 122.43
251 TestSkaffold 133.12
254 TestRunningBinaryUpgrade 220.07
256 TestKubernetesUpgrade 218.7
270 TestPause/serial/Start 109.16
279 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
280 TestNoKubernetes/serial/StartWithK8s 62.91
281 TestPause/serial/SecondStartNoReconfiguration 49.94
282 TestNoKubernetes/serial/StartWithStopK8s 33.28
283 TestPause/serial/Pause 0.65
284 TestPause/serial/VerifyStatus 0.26
285 TestPause/serial/Unpause 0.62
286 TestPause/serial/PauseAgain 0.81
287 TestPause/serial/DeletePaused 1.06
288 TestPause/serial/VerifyDeletedResources 14.37
289 TestStoppedBinaryUpgrade/Setup 0.47
290 TestStoppedBinaryUpgrade/Upgrade 185.3
291 TestNoKubernetes/serial/Start 51.19
292 TestNetworkPlugins/group/auto/Start 104.81
293 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
294 TestNoKubernetes/serial/ProfileList 19.09
295 TestNoKubernetes/serial/Stop 2.3
296 TestNoKubernetes/serial/StartNoArgs 32.64
297 TestNetworkPlugins/group/kindnet/Start 111.4
298 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
299 TestNetworkPlugins/group/calico/Start 139.95
300 TestNetworkPlugins/group/auto/KubeletFlags 0.23
301 TestNetworkPlugins/group/auto/NetCatPod 11.23
302 TestNetworkPlugins/group/auto/DNS 0.18
303 TestNetworkPlugins/group/auto/Localhost 0.14
304 TestNetworkPlugins/group/auto/HairPin 0.14
305 TestNetworkPlugins/group/custom-flannel/Start 111.7
306 TestStoppedBinaryUpgrade/MinikubeLogs 1.34
307 TestNetworkPlugins/group/false/Start 129.71
308 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
309 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
310 TestNetworkPlugins/group/kindnet/NetCatPod 11.28
311 TestNetworkPlugins/group/kindnet/DNS 0.2
312 TestNetworkPlugins/group/kindnet/Localhost 0.17
313 TestNetworkPlugins/group/kindnet/HairPin 0.15
314 TestNetworkPlugins/group/enable-default-cni/Start 201.37
315 TestNetworkPlugins/group/calico/ControllerPod 6.01
316 TestNetworkPlugins/group/calico/KubeletFlags 0.23
317 TestNetworkPlugins/group/calico/NetCatPod 13.25
318 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
319 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.33
320 TestNetworkPlugins/group/calico/DNS 0.24
321 TestNetworkPlugins/group/calico/Localhost 0.19
322 TestNetworkPlugins/group/calico/HairPin 0.19
323 TestNetworkPlugins/group/custom-flannel/DNS 0.21
324 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
325 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
326 TestNetworkPlugins/group/flannel/Start 79.44
327 TestNetworkPlugins/group/bridge/Start 128.06
328 TestNetworkPlugins/group/false/KubeletFlags 0.21
329 TestNetworkPlugins/group/false/NetCatPod 10.27
330 TestNetworkPlugins/group/false/DNS 0.23
331 TestNetworkPlugins/group/false/Localhost 0.28
332 TestNetworkPlugins/group/false/HairPin 0.17
333 TestNetworkPlugins/group/kubenet/Start 109.95
334 TestNetworkPlugins/group/flannel/ControllerPod 6.01
335 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
336 TestNetworkPlugins/group/flannel/NetCatPod 12.28
337 TestNetworkPlugins/group/flannel/DNS 0.24
338 TestNetworkPlugins/group/flannel/Localhost 0.2
339 TestNetworkPlugins/group/flannel/HairPin 0.19
341 TestStartStop/group/old-k8s-version/serial/FirstStart 139.73
342 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
343 TestNetworkPlugins/group/bridge/NetCatPod 12.4
344 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
345 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.26
346 TestNetworkPlugins/group/bridge/DNS 0.23
347 TestNetworkPlugins/group/bridge/Localhost 0.18
348 TestNetworkPlugins/group/bridge/HairPin 0.16
349 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
350 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
351 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
353 TestStartStop/group/no-preload/serial/FirstStart 86.29
354 TestNetworkPlugins/group/kubenet/KubeletFlags 0.38
355 TestNetworkPlugins/group/kubenet/NetCatPod 12.88
357 TestStartStop/group/embed-certs/serial/FirstStart 123.25
358 TestNetworkPlugins/group/kubenet/DNS 0.2
359 TestNetworkPlugins/group/kubenet/Localhost 0.16
360 TestNetworkPlugins/group/kubenet/HairPin 0.16
362 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 136.32
363 TestStartStop/group/no-preload/serial/DeployApp 9.42
364 TestStartStop/group/old-k8s-version/serial/DeployApp 10.63
365 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
366 TestStartStop/group/no-preload/serial/Stop 14.41
367 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.1
368 TestStartStop/group/old-k8s-version/serial/Stop 13.45
369 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
370 TestStartStop/group/no-preload/serial/SecondStart 308.69
371 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
372 TestStartStop/group/old-k8s-version/serial/SecondStart 546.92
373 TestStartStop/group/embed-certs/serial/DeployApp 8.37
374 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.25
375 TestStartStop/group/embed-certs/serial/Stop 13.4
376 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
377 TestStartStop/group/embed-certs/serial/SecondStart 304.21
378 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.75
379 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.12
380 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.36
381 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
382 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 316.94
383 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.01
384 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
386 TestStartStop/group/no-preload/serial/Pause 2.64
388 TestStartStop/group/newest-cni/serial/FirstStart 66.57
389 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
390 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
391 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
392 TestStartStop/group/embed-certs/serial/Pause 2.94
393 TestStartStop/group/newest-cni/serial/DeployApp 0
394 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.06
395 TestStartStop/group/newest-cni/serial/Stop 8.4
396 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 8.01
397 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
398 TestStartStop/group/newest-cni/serial/SecondStart 40.36
399 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
400 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
401 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.66
402 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
403 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
404 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
405 TestStartStop/group/newest-cni/serial/Pause 2.85
406 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
407 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
408 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
409 TestStartStop/group/old-k8s-version/serial/Pause 2.55

TestDownloadOnly/v1.20.0/json-events (10.56s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-174579 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-174579 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (10.560603062s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.56s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0918 19:38:02.411198   14866 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0918 19:38:02.411310   14866 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-174579
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-174579: exit status 85 (59.753023ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-174579 | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC |          |
	|         | -p download-only-174579        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 19:37:51
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 19:37:51.889016   14878 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:37:51.889130   14878 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:37:51.889139   14878 out.go:358] Setting ErrFile to fd 2...
	I0918 19:37:51.889143   14878 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:37:51.889332   14878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7655/.minikube/bin
	W0918 19:37:51.889441   14878 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19667-7655/.minikube/config/config.json: open /home/jenkins/minikube-integration/19667-7655/.minikube/config/config.json: no such file or directory
	I0918 19:37:51.889987   14878 out.go:352] Setting JSON to true
	I0918 19:37:51.890903   14878 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1220,"bootTime":1726687052,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 19:37:51.890999   14878 start.go:139] virtualization: kvm guest
	I0918 19:37:51.893525   14878 out.go:97] [download-only-174579] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0918 19:37:51.893661   14878 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19667-7655/.minikube/cache/preloaded-tarball: no such file or directory
	I0918 19:37:51.893710   14878 notify.go:220] Checking for updates...
	I0918 19:37:51.895109   14878 out.go:169] MINIKUBE_LOCATION=19667
	I0918 19:37:51.897408   14878 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:37:51.898956   14878 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19667-7655/kubeconfig
	I0918 19:37:51.900520   14878 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7655/.minikube
	I0918 19:37:51.902068   14878 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0918 19:37:51.904603   14878 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0918 19:37:51.904863   14878 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:37:52.012220   14878 out.go:97] Using the kvm2 driver based on user configuration
	I0918 19:37:52.012250   14878 start.go:297] selected driver: kvm2
	I0918 19:37:52.012258   14878 start.go:901] validating driver "kvm2" against <nil>
	I0918 19:37:52.012595   14878 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:37:52.012745   14878 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7655/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 19:37:52.028238   14878 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0918 19:37:52.028290   14878 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 19:37:52.028859   14878 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0918 19:37:52.029082   14878 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 19:37:52.029115   14878 cni.go:84] Creating CNI manager for ""
	I0918 19:37:52.029190   14878 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0918 19:37:52.029264   14878 start.go:340] cluster config:
	{Name:download-only-174579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-174579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:37:52.029493   14878 iso.go:125] acquiring lock: {Name:mk994b84cbc98dd0805f97e4539c3d8a9e02e7d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:37:52.031599   14878 out.go:97] Downloading VM boot image ...
	I0918 19:37:52.031638   14878 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19667-7655/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0918 19:37:56.007694   14878 out.go:97] Starting "download-only-174579" primary control-plane node in "download-only-174579" cluster
	I0918 19:37:56.007715   14878 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0918 19:37:56.033295   14878 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0918 19:37:56.033378   14878 cache.go:56] Caching tarball of preloaded images
	I0918 19:37:56.033584   14878 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0918 19:37:56.035631   14878 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0918 19:37:56.035659   14878 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0918 19:37:56.058884   14878 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19667-7655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0918 19:38:00.725574   14878 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0918 19:38:00.725727   14878 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19667-7655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0918 19:38:01.774346   14878 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0918 19:38:01.774754   14878 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/download-only-174579/config.json ...
	I0918 19:38:01.774795   14878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/download-only-174579/config.json: {Name:mk366bbfcc023150a285ba28849d8b27d3ee813e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:01.774981   14878 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0918 19:38:01.775201   14878 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19667-7655/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-174579 host does not exist
	  To start a cluster, run: "minikube start -p download-only-174579"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-174579
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (3.81s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-479685 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-479685 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=kvm2 : (3.810470778s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (3.81s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0918 19:38:06.547349   14866 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0918 19:38:06.547393   14866 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7655/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-479685
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-479685: exit status 85 (62.608528ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-174579 | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC |                     |
	|         | -p download-only-174579        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| delete  | -p download-only-174579        | download-only-174579 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| start   | -o=json --download-only        | download-only-479685 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | -p download-only-479685        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 19:38:02
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 19:38:02.774201   15087 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:38:02.774331   15087 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:02.774341   15087 out.go:358] Setting ErrFile to fd 2...
	I0918 19:38:02.774346   15087 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:02.774528   15087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7655/.minikube/bin
	I0918 19:38:02.775086   15087 out.go:352] Setting JSON to true
	I0918 19:38:02.775905   15087 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1231,"bootTime":1726687052,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 19:38:02.776000   15087 start.go:139] virtualization: kvm guest
	I0918 19:38:02.778407   15087 out.go:97] [download-only-479685] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 19:38:02.778598   15087 notify.go:220] Checking for updates...
	I0918 19:38:02.780192   15087 out.go:169] MINIKUBE_LOCATION=19667
	I0918 19:38:02.781730   15087 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:38:02.783535   15087 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19667-7655/kubeconfig
	I0918 19:38:02.785358   15087 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7655/.minikube
	I0918 19:38:02.787071   15087 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-479685 host does not exist
	  To start a cluster, run: "minikube start -p download-only-479685"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-479685
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I0918 19:38:07.127936   14866 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-429071 --alsologtostderr --binary-mirror http://127.0.0.1:43717 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-429071" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-429071
--- PASS: TestBinaryMirror (0.60s)

TestOffline (101.86s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-230690 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-230690 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m40.832263433s)
helpers_test.go:175: Cleaning up "offline-docker-230690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-230690
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-230690: (1.029118526s)
--- PASS: TestOffline (101.86s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-656419
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-656419: exit status 85 (53.197738ms)

-- stdout --
	* Profile "addons-656419" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-656419"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-656419
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-656419: exit status 85 (51.987307ms)

-- stdout --
	* Profile "addons-656419" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-656419"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (279.13s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-656419 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-656419 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (4m39.130184973s)
--- PASS: TestAddons/Setup (279.13s)

TestAddons/serial/Volcano (44.98s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 23.774919ms
addons_test.go:913: volcano-controller stabilized in 23.829394ms
addons_test.go:905: volcano-admission stabilized in 23.854574ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-k5bxx" [d1b865c4-de51-44e0-b2f9-45be36a7e6bb] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004896499s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-2qszx" [760dd71d-fccb-488a-8be2-46d51b16bb59] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00453988s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-k2qzr" [82824c23-79cf-488c-984a-3da0c4ff3731] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003558151s
addons_test.go:932: (dbg) Run:  kubectl --context addons-656419 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-656419 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-656419 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a6a23135-68ec-468d-a357-02dad59962d7] Pending
helpers_test.go:344: "test-job-nginx-0" [a6a23135-68ec-468d-a357-02dad59962d7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [a6a23135-68ec-468d-a357-02dad59962d7] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 18.00482531s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-656419 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-656419 addons disable volcano --alsologtostderr -v=1: (10.535239097s)
--- PASS: TestAddons/serial/Volcano (44.98s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-656419 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-656419 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (21.55s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-656419 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-656419 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-656419 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6d17ce62-e24b-4e3c-ab01-ccfcf3fbaff6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6d17ce62-e24b-4e3c-ab01-ccfcf3fbaff6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004122122s
I0918 19:52:25.579339   14866 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-656419 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-656419 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-656419 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.154
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-656419 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-656419 addons disable ingress-dns --alsologtostderr -v=1: (1.388765218s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-656419 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-656419 addons disable ingress --alsologtostderr -v=1: (7.78421809s)
--- PASS: TestAddons/parallel/Ingress (21.55s)

TestAddons/parallel/InspektorGadget (10.81s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hwhdt" [0e4e1b30-01bb-497a-be67-8382b51b9fd0] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005347215s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-656419
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-656419: (5.804223289s)
--- PASS: TestAddons/parallel/InspektorGadget (10.81s)

TestAddons/parallel/MetricsServer (6.81s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.532817ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-9748w" [af8930c7-4faf-4d6d-b064-02a9a6ae3479] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004803762s
addons_test.go:417: (dbg) Run:  kubectl --context addons-656419 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-656419 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.81s)

TestAddons/parallel/HelmTiller (11.41s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 5.137934ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-9vcgn" [3f9da820-37d4-44d6-a61e-f66ccc7d7588] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003630594s
addons_test.go:475: (dbg) Run:  kubectl --context addons-656419 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-656419 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.836649525s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-656419 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.41s)

TestAddons/parallel/CSI (57.77s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0918 19:52:01.484229   14866 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0918 19:52:01.489058   14866 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0918 19:52:01.489087   14866 kapi.go:107] duration metric: took 4.912201ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.922599ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-656419 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-656419 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3c10e81e-5819-4e01-a92d-3752b755f27c] Pending
helpers_test.go:344: "task-pv-pod" [3c10e81e-5819-4e01-a92d-3752b755f27c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3c10e81e-5819-4e01-a92d-3752b755f27c] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003963721s
addons_test.go:590: (dbg) Run:  kubectl --context addons-656419 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-656419 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-656419 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-656419 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-656419 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-656419 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-656419 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a38a6200-d8dc-4baf-8a82-8c2f0e319417] Pending
helpers_test.go:344: "task-pv-pod-restore" [a38a6200-d8dc-4baf-8a82-8c2f0e319417] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a38a6200-d8dc-4baf-8a82-8c2f0e319417] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.006665332s
addons_test.go:632: (dbg) Run:  kubectl --context addons-656419 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-656419 delete pod task-pv-pod-restore: (1.06624621s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-656419 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-656419 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-656419 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-656419 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.746116108s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-656419 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.77s)
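The long runs of `kubectl get pvc … -o jsonpath={.status.phase}` in this test are the harness polling each claim until it reports `Bound`. A minimal standalone sketch of that retry loop (the `wait_for_phase` helper name and the one-second interval are illustrative, not minikube's actual implementation):

```shell
# Poll a command's output until it equals the wanted value or a timeout elapses.
# Usage: wait_for_phase <wanted> <timeout_seconds> <command...>
wait_for_phase() {
  wanted="$1"; timeout="$2"; shift 2
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    phase="$("$@" 2>/dev/null)"
    if [ "$phase" = "$wanted" ]; then
      echo "reached $wanted after ${elapsed}s"
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "timed out waiting for $wanted" >&2
  return 1
}

# Against the live cluster the equivalent of the repeated log lines above would be:
# wait_for_phase Bound 360 \
#   kubectl --context addons-656419 get pvc hpvc -o 'jsonpath={.status.phase}' -n default
```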

TestAddons/parallel/Headlamp (20.06s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-656419 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-g4zlx" [1d18dd9c-2b48-4eac-aef7-82f6d20ddd3a] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-g4zlx" [1d18dd9c-2b48-4eac-aef7-82f6d20ddd3a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-g4zlx" [1d18dd9c-2b48-4eac-aef7-82f6d20ddd3a] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.021153592s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-656419 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-656419 addons disable headlamp --alsologtostderr -v=1: (6.050124772s)
--- PASS: TestAddons/parallel/Headlamp (20.06s)

TestAddons/parallel/CloudSpanner (6.49s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-llgtb" [2f4466d7-b053-4c52-8ad8-8e0d9d0cdedb] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004589701s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-656419
--- PASS: TestAddons/parallel/CloudSpanner (6.49s)

TestAddons/parallel/LocalPath (56.54s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-656419 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-656419 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-656419 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [2b47c144-0702-4849-96f5-f98faeef814f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [2b47c144-0702-4849-96f5-f98faeef814f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [2b47c144-0702-4849-96f5-f98faeef814f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004114685s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-656419 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-656419 ssh "cat /opt/local-path-provisioner/pvc-fe948a04-a782-481a-a290-a6b15945d18d_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-656419 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-656419 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-656419 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-656419 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.685010145s)
--- PASS: TestAddons/parallel/LocalPath (56.54s)

TestAddons/parallel/NvidiaDevicePlugin (6.66s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mpg2d" [6a45797b-d7b4-423b-93f3-b1a9a57eba5f] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004137062s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-656419
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.66s)

TestAddons/parallel/Yakd (10.69s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-lqzv2" [12ddb544-b5f6-4f4d-8b1b-7e00028cbc6e] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004593595s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-656419 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-656419 addons disable yakd --alsologtostderr -v=1: (5.680349562s)
--- PASS: TestAddons/parallel/Yakd (10.69s)

TestAddons/StoppedEnableDisable (8.6s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-656419
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-656419: (8.315507029s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-656419
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-656419
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-656419
--- PASS: TestAddons/StoppedEnableDisable (8.60s)

TestCertOptions (63.24s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-538744 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-538744 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m1.44925456s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-538744 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-538744 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-538744 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-538744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-538744
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-538744: (1.235015763s)
--- PASS: TestCertOptions (63.24s)
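TestCertOptions passes extra `--apiserver-ips`/`--apiserver-names` values and then inspects the generated `apiserver.crt` with `openssl x509 -text`. The same SAN check can be reproduced locally against a throwaway self-signed certificate (the `/tmp` paths are illustrative; `-addext` needs OpenSSL 1.1.1 or newer):

```shell
# Generate a throwaway cert carrying the same SAN entries the test requests,
# then confirm "openssl x509 -text" lists them, as the test does for apiserver.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/cert-options-key.pem -out /tmp/cert-options-cert.pem \
  -subj "/CN=minikube" \
  -addext "subjectAltName=DNS:localhost,DNS:www.google.com,IP:127.0.0.1,IP:192.168.15.15" \
  2>/dev/null

# Print the SAN section; the DNS names and IPs passed above should appear.
openssl x509 -text -noout -in /tmp/cert-options-cert.pem |
  grep -A1 "Subject Alternative Name"
```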

TestCertExpiration (335.13s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-862796 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-862796 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m41.870173736s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-862796 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-862796 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (52.166388801s)
helpers_test.go:175: Cleaning up "cert-expiration-862796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-862796
E0918 20:44:16.477121   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/gvisor-909102/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-862796: (1.093608343s)
--- PASS: TestCertExpiration (335.13s)

TestDockerFlags (60.93s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-867354 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-867354 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (59.248865677s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-867354 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-867354 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-867354" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-867354
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-867354: (1.17177363s)
--- PASS: TestDockerFlags (60.93s)

TestForceSystemdFlag (85.45s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-356771 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-356771 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m23.707542754s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-356771 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-356771" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-356771
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-356771: (1.456033516s)
--- PASS: TestForceSystemdFlag (85.45s)

                                                
                                    
TestForceSystemdEnv (88.08s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-260851 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-260851 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m26.62185369s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-260851 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-260851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-260851
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-260851: (1.166018229s)
--- PASS: TestForceSystemdEnv (88.08s)

                                                
                                    
TestKVMDriverInstallOrUpdate (5.5s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0918 20:38:25.346789   14866 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0918 20:38:25.346939   14866 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0918 20:38:25.402576   14866 install.go:62] docker-machine-driver-kvm2: exit status 1
W0918 20:38:25.402998   14866 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0918 20:38:25.403097   14866 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3871685949/001/docker-machine-driver-kvm2
I0918 20:38:25.840152   14866 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3871685949/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640] Decompressors:map[bz2:0xc00089c150 gz:0xc00089c158 tar:0xc00089c0a0 tar.bz2:0xc00089c0b0 tar.gz:0xc00089c100 tar.xz:0xc00089c130 tar.zst:0xc00089c140 tbz2:0xc00089c0b0 tgz:0xc00089c100 txz:0xc00089c130 tzst:0xc00089c140 xz:0xc00089c160 zip:0xc00089c170 zst:0xc00089c168] Getters:map[file:0xc00228c350 http:0xc0018320f0 https:0xc001832140] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0918 20:38:25.840213   14866 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3871685949/001/docker-machine-driver-kvm2
I0918 20:38:28.258010   14866 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0918 20:38:28.258170   14866 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0918 20:38:28.311175   14866 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0918 20:38:28.311209   14866 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0918 20:38:28.311262   14866 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0918 20:38:28.311293   14866 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3871685949/002/docker-machine-driver-kvm2
I0918 20:38:28.663517   14866 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3871685949/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640] Decompressors:map[bz2:0xc00089c150 gz:0xc00089c158 tar:0xc00089c0a0 tar.bz2:0xc00089c0b0 tar.gz:0xc00089c100 tar.xz:0xc00089c130 tar.zst:0xc00089c140 tbz2:0xc00089c0b0 tgz:0xc00089c100 txz:0xc00089c130 tzst:0xc00089c140 xz:0xc00089c160 zip:0xc00089c170 zst:0xc00089c168] Getters:map[file:0xc00228da00 http:0xc0018334f0 https:0xc001833540] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0918 20:38:28.663562   14866 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3871685949/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (5.50s)
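Both download attempts in the log above follow the same pattern: the arch-specific asset (the `-amd64` name) 404s on its checksum file, and the code falls back to the unversioned common name. A minimal shell sketch of that fallback (the `download` function here is a hypothetical stand-in that simulates the 404; the real code uses go-getter with checksum verification):

```shell
# Stand-in downloader: simulates the 404 on the arch-specific asset
# seen in the log, and succeeds on the common name.
download() {
  case $1 in
    *-amd64) echo "404 for $1" >&2; return 1 ;;
    *) echo "fetched $1" ;;
  esac
}

base=docker-machine-driver-kvm2
# Try the arch-specific asset first, then fall back to the common version.
download "$base-amd64" || download "$base"
```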

                                                
                                    
TestErrorSpam/setup (49.23s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-407030 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-407030 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-407030 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-407030 --driver=kvm2 : (49.228775303s)
--- PASS: TestErrorSpam/setup (49.23s)

                                                
                                    
TestErrorSpam/start (0.35s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-407030 --log_dir /tmp/nospam-407030 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-407030 --log_dir /tmp/nospam-407030 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-407030 --log_dir /tmp/nospam-407030 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.75s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-407030 --log_dir /tmp/nospam-407030 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-407030 --log_dir /tmp/nospam-407030 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-407030 --log_dir /tmp/nospam-407030 status
--- PASS: TestErrorSpam/status (0.75s)

                                                
                                    
TestErrorSpam/pause (1.26s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-407030 --log_dir /tmp/nospam-407030 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-407030 --log_dir /tmp/nospam-407030 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-407030 --log_dir /tmp/nospam-407030 pause
--- PASS: TestErrorSpam/pause (1.26s)

                                                
                                    
TestErrorSpam/unpause (1.47s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-407030 --log_dir /tmp/nospam-407030 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-407030 --log_dir /tmp/nospam-407030 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-407030 --log_dir /tmp/nospam-407030 unpause
--- PASS: TestErrorSpam/unpause (1.47s)

                                                
                                    
TestErrorSpam/stop (16.29s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-407030 --log_dir /tmp/nospam-407030 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-407030 --log_dir /tmp/nospam-407030 stop: (12.472902927s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-407030 --log_dir /tmp/nospam-407030 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-407030 --log_dir /tmp/nospam-407030 stop: (2.036070682s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-407030 --log_dir /tmp/nospam-407030 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-407030 --log_dir /tmp/nospam-407030 stop: (1.779173977s)
--- PASS: TestErrorSpam/stop (16.29s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19667-7655/.minikube/files/etc/test/nested/copy/14866/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (91.31s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-433731 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-433731 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m31.305522245s)
--- PASS: TestFunctional/serial/StartWithProxy (91.31s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (42.94s)
=== RUN   TestFunctional/serial/SoftStart
I0918 19:55:49.735032   14866 config.go:182] Loaded profile config "functional-433731": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-433731 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-433731 --alsologtostderr -v=8: (42.940546304s)
functional_test.go:663: soft start took 42.941267022s for "functional-433731" cluster.
I0918 19:56:32.676031   14866 config.go:182] Loaded profile config "functional-433731": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (42.94s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-433731 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-433731 cache add registry.k8s.io/pause:3.3: (1.26600707s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.34s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-433731 /tmp/TestFunctionalserialCacheCmdcacheadd_local73917301/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 cache add minikube-local-cache-test:functional-433731
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 cache delete minikube-local-cache-test:functional-433731
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-433731
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.19s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-433731 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (221.184828ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 kubectl -- --context functional-433731 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-433731 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.74s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-433731 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-433731 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.736697762s)
functional_test.go:761: restart took 41.736825077s for "functional-433731" cluster.
I0918 19:57:20.785451   14866 config.go:182] Loaded profile config "functional-433731": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (41.74s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-433731 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.16s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-433731 logs: (1.158889892s)
--- PASS: TestFunctional/serial/LogsCmd (1.16s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.04s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 logs --file /tmp/TestFunctionalserialLogsFileCmd3541351198/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-433731 logs --file /tmp/TestFunctionalserialLogsFileCmd3541351198/001/logs.txt: (1.035515109s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.04s)

                                                
                                    
TestFunctional/serial/InvalidService (4.85s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-433731 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-433731
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-433731: exit status 115 (281.537306ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.103:32713 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-433731 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-433731 delete -f testdata/invalidsvc.yaml: (1.350979456s)
--- PASS: TestFunctional/serial/InvalidService (4.85s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.34s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-433731 config get cpus: exit status 14 (52.530101ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-433731 config get cpus: exit status 14 (53.909889ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.71s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-433731 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-433731 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 25383: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.71s)

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-433731 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-433731 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (149.314053ms)

                                                
                                                
-- stdout --
	* [functional-433731] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7655/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7655/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0918 19:57:59.785650   25119 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:57:59.785783   25119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:57:59.785793   25119 out.go:358] Setting ErrFile to fd 2...
	I0918 19:57:59.785797   25119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:57:59.785967   25119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7655/.minikube/bin
	I0918 19:57:59.786581   25119 out.go:352] Setting JSON to false
	I0918 19:57:59.787518   25119 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2428,"bootTime":1726687052,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 19:57:59.787619   25119 start.go:139] virtualization: kvm guest
	I0918 19:57:59.790019   25119 out.go:177] * [functional-433731] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 19:57:59.792090   25119 notify.go:220] Checking for updates...
	I0918 19:57:59.792107   25119 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 19:57:59.794025   25119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:57:59.795929   25119 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7655/kubeconfig
	I0918 19:57:59.797760   25119 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7655/.minikube
	I0918 19:57:59.799720   25119 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 19:57:59.801854   25119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:57:59.803830   25119 config.go:182] Loaded profile config "functional-433731": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 19:57:59.804277   25119 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:57:59.804343   25119 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:57:59.819896   25119 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36675
	I0918 19:57:59.820387   25119 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:57:59.820950   25119 main.go:141] libmachine: Using API Version  1
	I0918 19:57:59.820975   25119 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:57:59.821450   25119 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:57:59.821667   25119 main.go:141] libmachine: (functional-433731) Calling .DriverName
	I0918 19:57:59.821954   25119 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:57:59.822373   25119 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:57:59.822411   25119 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:57:59.837909   25119 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42857
	I0918 19:57:59.838437   25119 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:57:59.838946   25119 main.go:141] libmachine: Using API Version  1
	I0918 19:57:59.838968   25119 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:57:59.839316   25119 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:57:59.839542   25119 main.go:141] libmachine: (functional-433731) Calling .DriverName
	I0918 19:57:59.878096   25119 out.go:177] * Using the kvm2 driver based on existing profile
	I0918 19:57:59.879731   25119 start.go:297] selected driver: kvm2
	I0918 19:57:59.879749   25119 start.go:901] validating driver "kvm2" against &{Name:functional-433731 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-433731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:57:59.879860   25119 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:57:59.882821   25119 out.go:201] 
	W0918 19:57:59.884431   25119 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0918 19:57:59.886102   25119 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-433731 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.29s)
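The first `--dry-run` invocation above fails with exit status 23 because 250MB is below minikube's memory floor; the second (without `--memory`) passes because the existing profile already requests 4000MB. The validation can be sketched as follows (the function is a stand-in; only the 1800MB minimum, the message text, and exit code 23 are taken from the log):

```shell
# Stand-in for minikube's requested-memory validation: anything under the
# 1800MB usable minimum is rejected with RSRC_INSUFFICIENT_REQ_MEMORY.
check_requested_memory() {
  requested_mb=$1
  if [ "$requested_mb" -lt 1800 ]; then
    echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation ${requested_mb}MiB is less than the usable minimum of 1800MB"
    return 23
  fi
  echo "memory ok"
}
```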

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-433731 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-433731 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (148.910456ms)

                                                
                                                
-- stdout --
	* [functional-433731] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7655/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7655/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0918 19:57:56.929148   24705 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:57:56.929275   24705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:57:56.929284   24705 out.go:358] Setting ErrFile to fd 2...
	I0918 19:57:56.929288   24705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:57:56.929575   24705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7655/.minikube/bin
	I0918 19:57:56.930108   24705 out.go:352] Setting JSON to false
	I0918 19:57:56.930974   24705 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2425,"bootTime":1726687052,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 19:57:56.931064   24705 start.go:139] virtualization: kvm guest
	I0918 19:57:56.933632   24705 out.go:177] * [functional-433731] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0918 19:57:56.935677   24705 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 19:57:56.935689   24705 notify.go:220] Checking for updates...
	I0918 19:57:56.937543   24705 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:57:56.939449   24705 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7655/kubeconfig
	I0918 19:57:56.941164   24705 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7655/.minikube
	I0918 19:57:56.942965   24705 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 19:57:56.944588   24705 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:57:56.946653   24705 config.go:182] Loaded profile config "functional-433731": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 19:57:56.947401   24705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:57:56.947460   24705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:57:56.963024   24705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33129
	I0918 19:57:56.963478   24705 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:57:56.964163   24705 main.go:141] libmachine: Using API Version  1
	I0918 19:57:56.964193   24705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:57:56.964649   24705 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:57:56.964816   24705 main.go:141] libmachine: (functional-433731) Calling .DriverName
	I0918 19:57:56.965110   24705 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:57:56.965530   24705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 19:57:56.965605   24705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:57:56.981249   24705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41801
	I0918 19:57:56.981697   24705 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:57:56.982208   24705 main.go:141] libmachine: Using API Version  1
	I0918 19:57:56.982233   24705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:57:56.982597   24705 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:57:56.982822   24705 main.go:141] libmachine: (functional-433731) Calling .DriverName
	I0918 19:57:57.017779   24705 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0918 19:57:57.019572   24705 start.go:297] selected driver: kvm2
	I0918 19:57:57.019592   24705 start.go:901] validating driver "kvm2" against &{Name:functional-433731 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-433731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:57:57.019689   24705 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:57:57.022305   24705 out.go:201] 
	W0918 19:57:57.023780   24705 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0918 19:57:57.025298   24705 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
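The French output above is selected by the caller's locale. A hypothetical sketch of locale-keyed message dispatch (minikube actually loads JSON translation catalogs; this only shows the selection step, with both strings taken from this run's logs):

```shell
# Pick the driver message for the active locale; fall back to English.
driver_message() {
  case "${LC_ALL:-${LANG:-en}}" in
    fr*) echo "* Utilisation du pilote kvm2 basé sur le profil existant" ;;
    *)   echo "* Using the kvm2 driver based on existing profile" ;;
  esac
}
```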

                                                
                                    
TestFunctional/parallel/StatusCmd (0.8s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.80s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (25.61s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-433731 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-433731 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-bghb5" [f6eea060-fa44-42b9-9b70-52475d1eeb0f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-bghb5" [f6eea060-fa44-42b9-9b70-52475d1eeb0f] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 25.007748749s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.103:32459
functional_test.go:1675: http://192.168.39.103:32459: success! body:

Hostname: hello-node-connect-67bdd5bbb4-bghb5

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.103:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.103:32459
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (25.61s)
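`minikube service ... --url` prints a NodePort endpoint (`http://192.168.39.103:32459` above) that the test then fetches. Splitting such an endpoint into host and port, as a probe script might, is plain parameter expansion (the URL below is the one from this run's log):

```shell
# Parse the NodePort endpoint printed by `minikube service ... --url`.
url="http://192.168.39.103:32459"
hostport=${url#http://}   # strip the scheme
host=${hostport%:*}       # drop the trailing :port
port=${hostport##*:}      # keep only the port
echo "$host $port"        # → 192.168.39.103 32459
```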

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (50.9s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [319f247e-1aa2-435f-b31a-b062a18f815f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004601885s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-433731 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-433731 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-433731 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-433731 get pvc myclaim -o=json
I0918 19:57:38.448648   14866 retry.go:31] will retry after 4.239638071s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:a5fc8d70-bf12-49fd-b4e4-2ffdc83acb62 ResourceVersion:742 Generation:0 CreationTimestamp:2024-09-18 19:57:35 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0006fc2b0 VolumeMode:0xc0006fc2e0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-433731 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-433731 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [951ee816-7772-48cc-8082-acf12addd3c4] Pending
helpers_test.go:344: "sp-pod" [951ee816-7772-48cc-8082-acf12addd3c4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0918 19:57:46.914822   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:57:46.921368   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:57:46.932865   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:57:46.954383   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:57:46.995873   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:57:47.077411   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:57:47.239061   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:57:47.561229   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [951ee816-7772-48cc-8082-acf12addd3c4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.005582719s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-433731 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-433731 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-433731 delete -f testdata/storage-provisioner/pod.yaml: (1.707186656s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-433731 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [39d47750-6679-458c-957f-3c0006ba255f] Pending
helpers_test.go:344: "sp-pod" [39d47750-6679-458c-957f-3c0006ba255f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [39d47750-6679-458c-957f-3c0006ba255f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004283206s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-433731 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.90s)
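The `retry.go:31] will retry after 4.239638071s` line above is minikube's polling loop waiting for the PVC to move from `Pending` to `Bound`. The same retry-with-growing-delay pattern can be sketched generically in shell (names and the doubling schedule are illustrative, not retry.go's exact backoff):

```shell
# Run a command until it succeeds, sleeping with a growing delay between
# attempts; give up after the requested number of attempts.
retry() {
  attempts=$1; shift
  delay=1 n=1
  until "$@"; do
    if [ "$n" -ge "$attempts" ]; then
      return 1            # out of attempts: surface the failure
    fi
    sleep "$delay"
    delay=$((delay * 2))  # grow the wait, like retry.go's increasing intervals
    n=$((n + 1))
  done
}
```

The test effectively does this around `kubectl get pvc myclaim -o=json`, retrying until the reported phase is `Bound`.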

                                                
                                    
TestFunctional/parallel/SSHCmd (0.44s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.33s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh -n functional-433731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 cp functional-433731:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4157208358/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh -n functional-433731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh -n functional-433731 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.33s)
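The CpCmd test copies a file into the node and cats it back to check the contents. The same copy-then-verify pattern, sketched with local paths (illustrative only; the real test uses `minikube cp` plus `ssh "sudo cat ..."`):

```shell
# Local stand-in for cp-then-verify: copy a file, then compare bytes.
src=$(mktemp)
dstdir=$(mktemp -d)
printf 'Test file for the cp command\n' > "$src"
cp "$src" "$dstdir/cp-test.txt"
cmp -s "$src" "$dstdir/cp-test.txt" && echo "contents match"
```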

                                                
                                    
TestFunctional/parallel/MySQL (32.25s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-433731 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-2rbk9" [92bb670a-f2a7-4220-b2fd-684d9a353b85] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-2rbk9" [92bb670a-f2a7-4220-b2fd-684d9a353b85] Running
E0918 19:57:48.203413   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:57:49.485643   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.007428009s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-433731 exec mysql-6cdb49bbb-2rbk9 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-433731 exec mysql-6cdb49bbb-2rbk9 -- mysql -ppassword -e "show databases;": exit status 1 (273.132971ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0918 19:57:53.962114   14866 retry.go:31] will retry after 1.227706641s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-433731 exec mysql-6cdb49bbb-2rbk9 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-433731 exec mysql-6cdb49bbb-2rbk9 -- mysql -ppassword -e "show databases;": exit status 1 (344.190164ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0918 19:57:55.534327   14866 retry.go:31] will retry after 2.1995589s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-433731 exec mysql-6cdb49bbb-2rbk9 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-433731 exec mysql-6cdb49bbb-2rbk9 -- mysql -ppassword -e "show databases;": exit status 1 (192.865092ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0918 19:57:57.927545   14866 retry.go:31] will retry after 2.604716166s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-433731 exec mysql-6cdb49bbb-2rbk9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.25s)

TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/14866/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh "sudo cat /etc/test/nested/copy/14866/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.34s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/14866.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh "sudo cat /etc/ssl/certs/14866.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/14866.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh "sudo cat /usr/share/ca-certificates/14866.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/148662.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh "sudo cat /etc/ssl/certs/148662.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/148662.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh "sudo cat /usr/share/ca-certificates/148662.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.34s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-433731 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.23s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-433731 ssh "sudo systemctl is-active crio": exit status 1 (226.552162ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.23s)

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.81s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.81s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-433731 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-433731
docker.io/kicbase/echo-server:functional-433731
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-433731 image ls --format short --alsologtostderr:
I0918 19:58:01.927257   25340 out.go:345] Setting OutFile to fd 1 ...
I0918 19:58:01.927414   25340 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:58:01.927427   25340 out.go:358] Setting ErrFile to fd 2...
I0918 19:58:01.927433   25340 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:58:01.927676   25340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7655/.minikube/bin
I0918 19:58:01.928553   25340 config.go:182] Loaded profile config "functional-433731": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:58:01.928700   25340 config.go:182] Loaded profile config "functional-433731": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:58:01.929316   25340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0918 19:58:01.929370   25340 main.go:141] libmachine: Launching plugin server for driver kvm2
I0918 19:58:01.948428   25340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35747
I0918 19:58:01.949012   25340 main.go:141] libmachine: () Calling .GetVersion
I0918 19:58:01.949662   25340 main.go:141] libmachine: Using API Version  1
I0918 19:58:01.949698   25340 main.go:141] libmachine: () Calling .SetConfigRaw
I0918 19:58:01.950208   25340 main.go:141] libmachine: () Calling .GetMachineName
I0918 19:58:01.950469   25340 main.go:141] libmachine: (functional-433731) Calling .GetState
I0918 19:58:01.952658   25340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0918 19:58:01.952698   25340 main.go:141] libmachine: Launching plugin server for driver kvm2
I0918 19:58:01.971136   25340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46835
I0918 19:58:01.971608   25340 main.go:141] libmachine: () Calling .GetVersion
I0918 19:58:01.972219   25340 main.go:141] libmachine: Using API Version  1
I0918 19:58:01.972247   25340 main.go:141] libmachine: () Calling .SetConfigRaw
I0918 19:58:01.972651   25340 main.go:141] libmachine: () Calling .GetMachineName
I0918 19:58:01.972828   25340 main.go:141] libmachine: (functional-433731) Calling .DriverName
I0918 19:58:01.973093   25340 ssh_runner.go:195] Run: systemctl --version
I0918 19:58:01.973121   25340 main.go:141] libmachine: (functional-433731) Calling .GetSSHHostname
I0918 19:58:01.976218   25340 main.go:141] libmachine: (functional-433731) DBG | domain functional-433731 has defined MAC address 52:54:00:87:19:c5 in network mk-functional-433731
I0918 19:58:01.976809   25340 main.go:141] libmachine: (functional-433731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:19:c5", ip: ""} in network mk-functional-433731: {Iface:virbr1 ExpiryTime:2024-09-18 20:54:33 +0000 UTC Type:0 Mac:52:54:00:87:19:c5 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:functional-433731 Clientid:01:52:54:00:87:19:c5}
I0918 19:58:01.976841   25340 main.go:141] libmachine: (functional-433731) DBG | domain functional-433731 has defined IP address 192.168.39.103 and MAC address 52:54:00:87:19:c5 in network mk-functional-433731
I0918 19:58:01.977061   25340 main.go:141] libmachine: (functional-433731) Calling .GetSSHPort
I0918 19:58:01.977269   25340 main.go:141] libmachine: (functional-433731) Calling .GetSSHKeyPath
I0918 19:58:01.977405   25340 main.go:141] libmachine: (functional-433731) Calling .GetSSHUsername
I0918 19:58:01.977543   25340 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/functional-433731/id_rsa Username:docker}
I0918 19:58:02.060085   25340 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0918 19:58:02.091365   25340 main.go:141] libmachine: Making call to close driver server
I0918 19:58:02.091383   25340 main.go:141] libmachine: (functional-433731) Calling .Close
I0918 19:58:02.091807   25340 main.go:141] libmachine: (functional-433731) DBG | Closing plugin on server side
I0918 19:58:02.091869   25340 main.go:141] libmachine: Successfully made call to close driver server
I0918 19:58:02.091879   25340 main.go:141] libmachine: Making call to close connection to plugin binary
I0918 19:58:02.091894   25340 main.go:141] libmachine: Making call to close driver server
I0918 19:58:02.091905   25340 main.go:141] libmachine: (functional-433731) Calling .Close
I0918 19:58:02.092151   25340 main.go:141] libmachine: Successfully made call to close driver server
I0918 19:58:02.092169   25340 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-433731 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-433731 | 451810153e7e4 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| docker.io/kicbase/echo-server               | functional-433731 | 9056ab77afb8e | 4.94MB |
| localhost/my-image                          | functional-433731 | 4db3ce90e750e | 1.24MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-433731 image ls --format table --alsologtostderr:
I0918 19:58:06.453604   25741 out.go:345] Setting OutFile to fd 1 ...
I0918 19:58:06.453733   25741 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:58:06.453743   25741 out.go:358] Setting ErrFile to fd 2...
I0918 19:58:06.453748   25741 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:58:06.453972   25741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7655/.minikube/bin
I0918 19:58:06.454570   25741 config.go:182] Loaded profile config "functional-433731": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:58:06.454680   25741 config.go:182] Loaded profile config "functional-433731": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:58:06.455066   25741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0918 19:58:06.455107   25741 main.go:141] libmachine: Launching plugin server for driver kvm2
I0918 19:58:06.470378   25741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44385
I0918 19:58:06.470905   25741 main.go:141] libmachine: () Calling .GetVersion
I0918 19:58:06.471595   25741 main.go:141] libmachine: Using API Version  1
I0918 19:58:06.471622   25741 main.go:141] libmachine: () Calling .SetConfigRaw
I0918 19:58:06.471958   25741 main.go:141] libmachine: () Calling .GetMachineName
I0918 19:58:06.472128   25741 main.go:141] libmachine: (functional-433731) Calling .GetState
I0918 19:58:06.474049   25741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0918 19:58:06.474095   25741 main.go:141] libmachine: Launching plugin server for driver kvm2
I0918 19:58:06.489141   25741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43945
I0918 19:58:06.489694   25741 main.go:141] libmachine: () Calling .GetVersion
I0918 19:58:06.490328   25741 main.go:141] libmachine: Using API Version  1
I0918 19:58:06.490355   25741 main.go:141] libmachine: () Calling .SetConfigRaw
I0918 19:58:06.490814   25741 main.go:141] libmachine: () Calling .GetMachineName
I0918 19:58:06.491050   25741 main.go:141] libmachine: (functional-433731) Calling .DriverName
I0918 19:58:06.491255   25741 ssh_runner.go:195] Run: systemctl --version
I0918 19:58:06.491291   25741 main.go:141] libmachine: (functional-433731) Calling .GetSSHHostname
I0918 19:58:06.494631   25741 main.go:141] libmachine: (functional-433731) DBG | domain functional-433731 has defined MAC address 52:54:00:87:19:c5 in network mk-functional-433731
I0918 19:58:06.495103   25741 main.go:141] libmachine: (functional-433731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:19:c5", ip: ""} in network mk-functional-433731: {Iface:virbr1 ExpiryTime:2024-09-18 20:54:33 +0000 UTC Type:0 Mac:52:54:00:87:19:c5 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:functional-433731 Clientid:01:52:54:00:87:19:c5}
I0918 19:58:06.495132   25741 main.go:141] libmachine: (functional-433731) DBG | domain functional-433731 has defined IP address 192.168.39.103 and MAC address 52:54:00:87:19:c5 in network mk-functional-433731
I0918 19:58:06.495397   25741 main.go:141] libmachine: (functional-433731) Calling .GetSSHPort
I0918 19:58:06.495594   25741 main.go:141] libmachine: (functional-433731) Calling .GetSSHKeyPath
I0918 19:58:06.495758   25741 main.go:141] libmachine: (functional-433731) Calling .GetSSHUsername
I0918 19:58:06.495912   25741 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/functional-433731/id_rsa Username:docker}
I0918 19:58:06.588472   25741 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0918 19:58:06.632294   25741 main.go:141] libmachine: Making call to close driver server
I0918 19:58:06.632316   25741 main.go:141] libmachine: (functional-433731) Calling .Close
I0918 19:58:06.632522   25741 main.go:141] libmachine: Successfully made call to close driver server
I0918 19:58:06.632544   25741 main.go:141] libmachine: Making call to close connection to plugin binary
I0918 19:58:06.632552   25741 main.go:141] libmachine: Making call to close driver server
I0918 19:58:06.632560   25741 main.go:141] libmachine: (functional-433731) Calling .Close
I0918 19:58:06.632750   25741 main.go:141] libmachine: Successfully made call to close driver server
I0918 19:58:06.632762   25741 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-433731 image ls --format json --alsologtostderr:
[{"id":"451810153e7e44070583a70c6b788ba471308996fd42ce42b361df30a9dff0ca","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-433731"],"size":"30"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-433731"],"size":"4940000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133
eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"4db3ce90e750e6d67d772841b3cd64398be5f4536ba5d865aa9156b0da205353","repoDigests":[],"repoTags":["localhost/my-image:functional-433731"],"size":"1240000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8
s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-433731 image ls --format json --alsologtostderr:
I0918 19:58:06.231456   25701 out.go:345] Setting OutFile to fd 1 ...
I0918 19:58:06.231560   25701 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:58:06.231566   25701 out.go:358] Setting ErrFile to fd 2...
I0918 19:58:06.231572   25701 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:58:06.231801   25701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7655/.minikube/bin
I0918 19:58:06.232533   25701 config.go:182] Loaded profile config "functional-433731": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:58:06.232684   25701 config.go:182] Loaded profile config "functional-433731": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:58:06.233170   25701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0918 19:58:06.233217   25701 main.go:141] libmachine: Launching plugin server for driver kvm2
I0918 19:58:06.252551   25701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44875
I0918 19:58:06.253081   25701 main.go:141] libmachine: () Calling .GetVersion
I0918 19:58:06.253706   25701 main.go:141] libmachine: Using API Version  1
I0918 19:58:06.253729   25701 main.go:141] libmachine: () Calling .SetConfigRaw
I0918 19:58:06.254076   25701 main.go:141] libmachine: () Calling .GetMachineName
I0918 19:58:06.254253   25701 main.go:141] libmachine: (functional-433731) Calling .GetState
I0918 19:58:06.256284   25701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0918 19:58:06.256339   25701 main.go:141] libmachine: Launching plugin server for driver kvm2
I0918 19:58:06.271851   25701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46411
I0918 19:58:06.272355   25701 main.go:141] libmachine: () Calling .GetVersion
I0918 19:58:06.273240   25701 main.go:141] libmachine: Using API Version  1
I0918 19:58:06.273265   25701 main.go:141] libmachine: () Calling .SetConfigRaw
I0918 19:58:06.273615   25701 main.go:141] libmachine: () Calling .GetMachineName
I0918 19:58:06.273831   25701 main.go:141] libmachine: (functional-433731) Calling .DriverName
I0918 19:58:06.274054   25701 ssh_runner.go:195] Run: systemctl --version
I0918 19:58:06.274084   25701 main.go:141] libmachine: (functional-433731) Calling .GetSSHHostname
I0918 19:58:06.276791   25701 main.go:141] libmachine: (functional-433731) DBG | domain functional-433731 has defined MAC address 52:54:00:87:19:c5 in network mk-functional-433731
I0918 19:58:06.277209   25701 main.go:141] libmachine: (functional-433731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:19:c5", ip: ""} in network mk-functional-433731: {Iface:virbr1 ExpiryTime:2024-09-18 20:54:33 +0000 UTC Type:0 Mac:52:54:00:87:19:c5 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:functional-433731 Clientid:01:52:54:00:87:19:c5}
I0918 19:58:06.277242   25701 main.go:141] libmachine: (functional-433731) DBG | domain functional-433731 has defined IP address 192.168.39.103 and MAC address 52:54:00:87:19:c5 in network mk-functional-433731
I0918 19:58:06.277412   25701 main.go:141] libmachine: (functional-433731) Calling .GetSSHPort
I0918 19:58:06.277556   25701 main.go:141] libmachine: (functional-433731) Calling .GetSSHKeyPath
I0918 19:58:06.277707   25701 main.go:141] libmachine: (functional-433731) Calling .GetSSHUsername
I0918 19:58:06.277831   25701 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/functional-433731/id_rsa Username:docker}
I0918 19:58:06.364095   25701 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0918 19:58:06.401817   25701 main.go:141] libmachine: Making call to close driver server
I0918 19:58:06.401835   25701 main.go:141] libmachine: (functional-433731) Calling .Close
I0918 19:58:06.402104   25701 main.go:141] libmachine: Successfully made call to close driver server
I0918 19:58:06.402131   25701 main.go:141] libmachine: Making call to close connection to plugin binary
I0918 19:58:06.402139   25701 main.go:141] libmachine: Making call to close driver server
I0918 19:58:06.402147   25701 main.go:141] libmachine: (functional-433731) Calling .Close
I0918 19:58:06.402146   25701 main.go:141] libmachine: (functional-433731) DBG | Closing plugin on server side
I0918 19:58:06.402374   25701 main.go:141] libmachine: (functional-433731) DBG | Closing plugin on server side
I0918 19:58:06.402410   25701 main.go:141] libmachine: Successfully made call to close driver server
I0918 19:58:06.402422   25701 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-433731 image ls --format yaml --alsologtostderr:
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 451810153e7e44070583a70c6b788ba471308996fd42ce42b361df30a9dff0ca
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-433731
size: "30"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-433731
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-433731 image ls --format yaml --alsologtostderr:
I0918 19:58:02.150396   25364 out.go:345] Setting OutFile to fd 1 ...
I0918 19:58:02.150637   25364 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:58:02.150683   25364 out.go:358] Setting ErrFile to fd 2...
I0918 19:58:02.150695   25364 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:58:02.151368   25364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7655/.minikube/bin
I0918 19:58:02.152273   25364 config.go:182] Loaded profile config "functional-433731": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:58:02.152444   25364 config.go:182] Loaded profile config "functional-433731": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:58:02.153041   25364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0918 19:58:02.153094   25364 main.go:141] libmachine: Launching plugin server for driver kvm2
I0918 19:58:02.170216   25364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35995
I0918 19:58:02.170724   25364 main.go:141] libmachine: () Calling .GetVersion
I0918 19:58:02.171619   25364 main.go:141] libmachine: Using API Version  1
I0918 19:58:02.171651   25364 main.go:141] libmachine: () Calling .SetConfigRaw
I0918 19:58:02.172591   25364 main.go:141] libmachine: () Calling .GetMachineName
I0918 19:58:02.172774   25364 main.go:141] libmachine: (functional-433731) Calling .GetState
I0918 19:58:02.175459   25364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0918 19:58:02.175511   25364 main.go:141] libmachine: Launching plugin server for driver kvm2
I0918 19:58:02.192494   25364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40717
I0918 19:58:02.193002   25364 main.go:141] libmachine: () Calling .GetVersion
I0918 19:58:02.193456   25364 main.go:141] libmachine: Using API Version  1
I0918 19:58:02.193477   25364 main.go:141] libmachine: () Calling .SetConfigRaw
I0918 19:58:02.193819   25364 main.go:141] libmachine: () Calling .GetMachineName
I0918 19:58:02.194022   25364 main.go:141] libmachine: (functional-433731) Calling .DriverName
I0918 19:58:02.194286   25364 ssh_runner.go:195] Run: systemctl --version
I0918 19:58:02.194333   25364 main.go:141] libmachine: (functional-433731) Calling .GetSSHHostname
I0918 19:58:02.197328   25364 main.go:141] libmachine: (functional-433731) DBG | domain functional-433731 has defined MAC address 52:54:00:87:19:c5 in network mk-functional-433731
I0918 19:58:02.197756   25364 main.go:141] libmachine: (functional-433731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:19:c5", ip: ""} in network mk-functional-433731: {Iface:virbr1 ExpiryTime:2024-09-18 20:54:33 +0000 UTC Type:0 Mac:52:54:00:87:19:c5 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:functional-433731 Clientid:01:52:54:00:87:19:c5}
I0918 19:58:02.197782   25364 main.go:141] libmachine: (functional-433731) DBG | domain functional-433731 has defined IP address 192.168.39.103 and MAC address 52:54:00:87:19:c5 in network mk-functional-433731
I0918 19:58:02.197939   25364 main.go:141] libmachine: (functional-433731) Calling .GetSSHPort
I0918 19:58:02.198098   25364 main.go:141] libmachine: (functional-433731) Calling .GetSSHKeyPath
I0918 19:58:02.198215   25364 main.go:141] libmachine: (functional-433731) Calling .GetSSHUsername
I0918 19:58:02.198368   25364 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/functional-433731/id_rsa Username:docker}
I0918 19:58:02.298737   25364 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0918 19:58:02.343408   25364 main.go:141] libmachine: Making call to close driver server
I0918 19:58:02.343425   25364 main.go:141] libmachine: (functional-433731) Calling .Close
I0918 19:58:02.343699   25364 main.go:141] libmachine: Successfully made call to close driver server
I0918 19:58:02.343721   25364 main.go:141] libmachine: Making call to close connection to plugin binary
I0918 19:58:02.343736   25364 main.go:141] libmachine: Making call to close driver server
I0918 19:58:02.343740   25364 main.go:141] libmachine: (functional-433731) DBG | Closing plugin on server side
I0918 19:58:02.343744   25364 main.go:141] libmachine: (functional-433731) Calling .Close
I0918 19:58:02.344046   25364 main.go:141] libmachine: Successfully made call to close driver server
I0918 19:58:02.344069   25364 main.go:141] libmachine: Making call to close connection to plugin binary
I0918 19:58:02.344069   25364 main.go:141] libmachine: (functional-433731) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
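The stderr above shows how `image ls` works under the hood: minikube opens an SSH session into the VM and runs `docker images --no-trunc --format "{{json .}}"`, which emits one JSON object per line. A minimal sketch of parsing that JSON-lines output in Python (the sample records below are invented for illustration, not taken from this run):

```python
import json

def parse_docker_images(output: str) -> list[dict]:
    """Parse `docker images --format "{{json .}}"` output: one JSON object per line."""
    images = []
    for line in output.splitlines():
        line = line.strip()
        if not line:
            continue
        images.append(json.loads(line))
    return images

# Sample lines shaped like docker's json-format output (values illustrative).
sample = "\n".join([
    '{"Repository":"registry.k8s.io/pause","Tag":"3.10","ID":"sha256:873ed751","Size":"736kB"}',
    '{"Repository":"registry.k8s.io/etcd","Tag":"3.5.15-0","ID":"sha256:2e96e591","Size":"148MB"}',
])

images = parse_docker_images(sample)
print([f"{i['Repository']}:{i['Tag']}" for i in images])
```

minikube then reshapes these records into the yaml/json/table formats that `image ls --format` exposes.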

TestFunctional/parallel/ImageCommands/ImageBuild (3.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-433731 ssh pgrep buildkitd: exit status 1 (205.364279ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 image build -t localhost/my-image:functional-433731 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-433731 image build -t localhost/my-image:functional-433731 testdata/build --alsologtostderr: (3.388292235s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-433731 image build -t localhost/my-image:functional-433731 testdata/build --alsologtostderr:
I0918 19:58:02.606576   25427 out.go:345] Setting OutFile to fd 1 ...
I0918 19:58:02.606808   25427 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:58:02.606820   25427 out.go:358] Setting ErrFile to fd 2...
I0918 19:58:02.606825   25427 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:58:02.607087   25427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7655/.minikube/bin
I0918 19:58:02.608062   25427 config.go:182] Loaded profile config "functional-433731": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:58:02.608948   25427 config.go:182] Loaded profile config "functional-433731": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:58:02.609517   25427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0918 19:58:02.609634   25427 main.go:141] libmachine: Launching plugin server for driver kvm2
I0918 19:58:02.632118   25427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33709
I0918 19:58:02.633018   25427 main.go:141] libmachine: () Calling .GetVersion
I0918 19:58:02.633749   25427 main.go:141] libmachine: Using API Version  1
I0918 19:58:02.633770   25427 main.go:141] libmachine: () Calling .SetConfigRaw
I0918 19:58:02.634266   25427 main.go:141] libmachine: () Calling .GetMachineName
I0918 19:58:02.634534   25427 main.go:141] libmachine: (functional-433731) Calling .GetState
I0918 19:58:02.636647   25427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0918 19:58:02.636699   25427 main.go:141] libmachine: Launching plugin server for driver kvm2
I0918 19:58:02.658864   25427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38889
I0918 19:58:02.659723   25427 main.go:141] libmachine: () Calling .GetVersion
I0918 19:58:02.660348   25427 main.go:141] libmachine: Using API Version  1
I0918 19:58:02.660364   25427 main.go:141] libmachine: () Calling .SetConfigRaw
I0918 19:58:02.660736   25427 main.go:141] libmachine: () Calling .GetMachineName
I0918 19:58:02.660960   25427 main.go:141] libmachine: (functional-433731) Calling .DriverName
I0918 19:58:02.661211   25427 ssh_runner.go:195] Run: systemctl --version
I0918 19:58:02.661236   25427 main.go:141] libmachine: (functional-433731) Calling .GetSSHHostname
I0918 19:58:02.664254   25427 main.go:141] libmachine: (functional-433731) DBG | domain functional-433731 has defined MAC address 52:54:00:87:19:c5 in network mk-functional-433731
I0918 19:58:02.664895   25427 main.go:141] libmachine: (functional-433731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:19:c5", ip: ""} in network mk-functional-433731: {Iface:virbr1 ExpiryTime:2024-09-18 20:54:33 +0000 UTC Type:0 Mac:52:54:00:87:19:c5 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:functional-433731 Clientid:01:52:54:00:87:19:c5}
I0918 19:58:02.664945   25427 main.go:141] libmachine: (functional-433731) DBG | domain functional-433731 has defined IP address 192.168.39.103 and MAC address 52:54:00:87:19:c5 in network mk-functional-433731
I0918 19:58:02.665134   25427 main.go:141] libmachine: (functional-433731) Calling .GetSSHPort
I0918 19:58:02.665343   25427 main.go:141] libmachine: (functional-433731) Calling .GetSSHKeyPath
I0918 19:58:02.665552   25427 main.go:141] libmachine: (functional-433731) Calling .GetSSHUsername
I0918 19:58:02.665723   25427 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/functional-433731/id_rsa Username:docker}
I0918 19:58:02.781938   25427 build_images.go:161] Building image from path: /tmp/build.2142190540.tar
I0918 19:58:02.782021   25427 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0918 19:58:02.797089   25427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2142190540.tar
I0918 19:58:02.805153   25427 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2142190540.tar: stat -c "%s %y" /var/lib/minikube/build/build.2142190540.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2142190540.tar': No such file or directory
I0918 19:58:02.805234   25427 ssh_runner.go:362] scp /tmp/build.2142190540.tar --> /var/lib/minikube/build/build.2142190540.tar (3072 bytes)
I0918 19:58:02.846605   25427 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2142190540
I0918 19:58:02.868872   25427 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2142190540 -xf /var/lib/minikube/build/build.2142190540.tar
I0918 19:58:02.892643   25427 docker.go:360] Building image: /var/lib/minikube/build/build.2142190540
I0918 19:58:02.892732   25427 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-433731 /var/lib/minikube/build/build.2142190540
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 writing image sha256:4db3ce90e750e6d67d772841b3cd64398be5f4536ba5d865aa9156b0da205353 done
#8 naming to localhost/my-image:functional-433731
#8 naming to localhost/my-image:functional-433731 0.0s done
#8 DONE 0.1s
I0918 19:58:05.902751   25427 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-433731 /var/lib/minikube/build/build.2142190540: (3.009989286s)
I0918 19:58:05.902877   25427 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2142190540
I0918 19:58:05.922394   25427 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2142190540.tar
I0918 19:58:05.938827   25427 build_images.go:217] Built localhost/my-image:functional-433731 from /tmp/build.2142190540.tar
I0918 19:58:05.938869   25427 build_images.go:133] succeeded building to: functional-433731
I0918 19:58:05.938876   25427 build_images.go:134] failed building to: 
I0918 19:58:05.938903   25427 main.go:141] libmachine: Making call to close driver server
I0918 19:58:05.938917   25427 main.go:141] libmachine: (functional-433731) Calling .Close
I0918 19:58:05.939248   25427 main.go:141] libmachine: Successfully made call to close driver server
I0918 19:58:05.939267   25427 main.go:141] libmachine: Making call to close connection to plugin binary
I0918 19:58:05.939276   25427 main.go:141] libmachine: Making call to close driver server
I0918 19:58:05.939284   25427 main.go:141] libmachine: (functional-433731) Calling .Close
I0918 19:58:05.939516   25427 main.go:141] libmachine: Successfully made call to close driver server
I0918 19:58:05.939589   25427 main.go:141] libmachine: (functional-433731) DBG | Closing plugin on server side
I0918 19:58:05.939632   25427 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.84s)
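The build log above shows the transfer flow: minikube packs the local `testdata/build` context into `/tmp/build.2142190540.tar`, scp's it into the VM, unpacks it under `/var/lib/minikube/build/`, and only then runs `docker build` against the unpacked directory. The pack/unpack half of that flow can be sketched with Python's stdlib `tarfile` (directory layout and file names here are illustrative stand-ins, not minikube's actual helpers):

```python
import tarfile
import tempfile
from pathlib import Path

# Build a tiny context directory, a stand-in for testdata/build.
ctx = Path(tempfile.mkdtemp())
(ctx / "Dockerfile").write_text("FROM gcr.io/k8s-minikube/busybox\nADD content.txt /\n")
(ctx / "content.txt").write_text("hello\n")

# Pack the context into a tar, as minikube does before copying it to the VM.
tar_path = Path(tempfile.mkdtemp()) / "build.tar"
with tarfile.open(tar_path, "w") as tf:
    for f in ctx.iterdir():
        tf.add(f, arcname=f.name)

# Unpack on the "remote" side, mimicking `tar -C <build dir> -xf <tar>`.
dest = Path(tempfile.mkdtemp())
with tarfile.open(tar_path) as tf:
    tf.extractall(dest)

print(sorted(p.name for p in dest.iterdir()))  # → ['Dockerfile', 'content.txt']
```

Copying a tar rather than individual files keeps the transfer to a single scp round-trip regardless of how many files the build context contains.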

TestFunctional/parallel/ImageCommands/Setup (1.55s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.52602948s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-433731
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.55s)

TestFunctional/parallel/DockerEnv/bash (0.88s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-433731 docker-env) && out/minikube-linux-amd64 status -p functional-433731"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-433731 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.88s)
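The DockerEnv test works by evaluating the shell `export` lines that `minikube docker-env` prints, so that a subsequent `docker images` talks to the VM's daemon instead of the host's. A minimal sketch of what that `eval` consumes, in Python (the variable names and values below are representative samples, not output captured from this run):

```python
import re

# Sample output shaped like `minikube docker-env` (values illustrative).
env_output = '''\
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.39.103:2376"
export DOCKER_CERT_PATH="/home/jenkins/.minikube/certs"
'''

def parse_exports(text: str) -> dict:
    """Collect KEY -> VALUE pairs from `export KEY="VALUE"` lines,
    roughly what `eval $(minikube docker-env)` does in the shell."""
    env = {}
    for m in re.finditer(r'^export (\w+)="([^"]*)"$', text, re.M):
        env[m.group(1)] = m.group(2)
    return env

env = parse_exports(env_output)
print(env["DOCKER_HOST"])
```

With those variables in the environment, the docker CLI connects to the TCP endpoint in `DOCKER_HOST`, which is why the second half of the test can list the VM's images with a plain `docker images`.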

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 image load --daemon kicbase/echo-server:functional-433731 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.18s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "322.829132ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "52.212897ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 image load --daemon kicbase/echo-server:functional-433731 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "318.247762ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "52.857428ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-433731
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 image load --daemon kicbase/echo-server:functional-433731 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 image save kicbase/echo-server:functional-433731 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 image rm kicbase/echo-server:functional-433731 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-433731
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 image save --daemon kicbase/echo-server:functional-433731 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-433731
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

TestFunctional/parallel/ServiceCmd/DeployApp (21.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-433731 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-433731 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-r8v8d" [1ce57825-6ea0-4f55-884d-934fbe1ae0e9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
I0918 19:57:35.709928   14866 retry.go:31] will retry after 2.679053241s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:a5fc8d70-bf12-49fd-b4e4-2ffdc83acb62 ResourceVersion:742 Generation:0 CreationTimestamp:2024-09-18 19:57:35 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0017f80e0 VolumeMode:0xc0017f80f0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
helpers_test.go:344: "hello-node-6b9f76b5c7-r8v8d" [1ce57825-6ea0-4f55-884d-934fbe1ae0e9] Running
E0918 19:57:52.047855   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 21.012894106s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (21.32s)

TestFunctional/parallel/ServiceCmd/List (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

TestFunctional/parallel/MountCmd/any-port (7.83s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-433731 /tmp/TestFunctionalparallelMountCmdany-port323641154/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726689477030293036" to /tmp/TestFunctionalparallelMountCmdany-port323641154/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726689477030293036" to /tmp/TestFunctionalparallelMountCmdany-port323641154/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726689477030293036" to /tmp/TestFunctionalparallelMountCmdany-port323641154/001/test-1726689477030293036
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-433731 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (257.398207ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0918 19:57:57.288037   14866 retry.go:31] will retry after 658.332944ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 18 19:57 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 18 19:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 18 19:57 test-1726689477030293036
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh cat /mount-9p/test-1726689477030293036
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-433731 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [bdce8b03-e9ef-4d73-997e-d4ad5fea91f8] Pending
helpers_test.go:344: "busybox-mount" [bdce8b03-e9ef-4d73-997e-d4ad5fea91f8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [bdce8b03-e9ef-4d73-997e-d4ad5fea91f8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [bdce8b03-e9ef-4d73-997e-d4ad5fea91f8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003557099s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-433731 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-433731 /tmp/TestFunctionalparallelMountCmdany-port323641154/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.83s)
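The `retry.go:31` line above is the harness re-running the `findmnt` probe until the 9p mount shows up inside the guest. A minimal sketch of that retry shape in shell (the helper name `retry_until` and the trivial probe are illustrative, not from minikube; the real probe is `minikube ssh "findmnt -T /mount-9p | grep 9p"`):

```shell
# Re-run a command until it succeeds or attempts run out,
# sleeping between tries -- the same shape as the harness retry.
retry_until() {
  attempts=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then return 0; fi
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}

# Illustrative probe that succeeds immediately; substitute the real
# findmnt check when a VM is available.
retry_until 3 0 true && echo "mount probe succeeded"
```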

TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 service list -o json
E0918 19:57:57.169863   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1494: Took "503.741767ms" to run "out/minikube-linux-amd64 -p functional-433731 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.103:32656
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.66s)

TestFunctional/parallel/ServiceCmd/Format (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

TestFunctional/parallel/ServiceCmd/URL (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.103:32656
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)

TestFunctional/parallel/MountCmd/specific-port (2s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-433731 /tmp/TestFunctionalparallelMountCmdspecific-port3208911982/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-433731 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (269.55634ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0918 19:58:05.125242   14866 retry.go:31] will retry after 577.463511ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-433731 /tmp/TestFunctionalparallelMountCmdspecific-port3208911982/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-433731 ssh "sudo umount -f /mount-9p": exit status 1 (200.29738ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-433731 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-433731 /tmp/TestFunctionalparallelMountCmdspecific-port3208911982/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.33s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-433731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1926385763/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-433731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1926385763/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-433731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1926385763/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-433731 ssh "findmnt -T" /mount1: exit status 1 (289.086691ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0918 19:58:07.147199   14866 retry.go:31] will retry after 337.215025ms: exit status 1
E0918 19:58:07.411945   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-433731 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-433731 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-433731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1926385763/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-433731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1926385763/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-433731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1926385763/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
2024/09/18 19:58:12 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.33s)
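VerifyCleanup asserts that killing the mount daemons actually unmounts /mount1, /mount2 and /mount3. The same kind of check can be sketched outside the VM by scanning /proc/mounts for a target path (the helper name `is_mounted` is illustrative, not part of minikube; assumes a Linux host with /proc mounted):

```shell
# Return 0 if the given path appears as a mount point (second field)
# in /proc/mounts, 1 otherwise.
is_mounted() {
  awk -v target="$1" '$2 == target { found = 1 } END { exit !found }' /proc/mounts
}

if is_mounted /; then
  echo "/ is mounted"
fi
if ! is_mounted /mount1; then
  echo "/mount1 is gone"
fi
```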

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-433731
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-433731
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-433731
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (324.84s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-909102 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E0918 20:37:28.681851   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:37:46.915088   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-909102 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m41.452348007s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-909102 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-909102 cache add gcr.io/k8s-minikube/gvisor-addon:2: (22.620938755s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-909102 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-909102 addons enable gvisor: (3.688928709s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [cb57882c-97b5-4159-b444-f0eeb83315ff] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004434596s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-909102 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [ced15b60-f226-4ac7-8fea-fbcb943a09ba] Pending
helpers_test.go:344: "nginx-gvisor" [ced15b60-f226-4ac7-8fea-fbcb943a09ba] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [ced15b60-f226-4ac7-8fea-fbcb943a09ba] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 40.005557736s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-909102
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-909102: (1m32.522445704s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-909102 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-909102 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (45.228963358s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [cb57882c-97b5-4159-b444-f0eeb83315ff] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:344: "gvisor" [cb57882c-97b5-4159-b444-f0eeb83315ff] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.00559724s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [ced15b60-f226-4ac7-8fea-fbcb943a09ba] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 6.004368089s
helpers_test.go:175: Cleaning up "gvisor-909102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-909102
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-909102: (1.130921216s)
--- PASS: TestGvisorAddon (324.84s)

TestMultiControlPlane/serial/StartCluster (229.31s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-652424 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0918 19:58:27.893963   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:59:08.855460   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:00:30.777171   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-652424 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m48.566329279s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (229.31s)

TestMultiControlPlane/serial/DeployApp (5.91s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652424 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652424 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-652424 -- rollout status deployment/busybox: (3.508825085s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652424 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652424 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652424 -- exec busybox-7dff88458-4skwf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652424 -- exec busybox-7dff88458-5d22f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652424 -- exec busybox-7dff88458-s5bhs -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652424 -- exec busybox-7dff88458-4skwf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652424 -- exec busybox-7dff88458-5d22f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652424 -- exec busybox-7dff88458-s5bhs -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652424 -- exec busybox-7dff88458-4skwf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652424 -- exec busybox-7dff88458-5d22f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652424 -- exec busybox-7dff88458-s5bhs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.91s)

TestMultiControlPlane/serial/PingHostFromPods (1.35s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652424 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652424 -- exec busybox-7dff88458-4skwf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652424 -- exec busybox-7dff88458-4skwf -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652424 -- exec busybox-7dff88458-5d22f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652424 -- exec busybox-7dff88458-5d22f -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652424 -- exec busybox-7dff88458-s5bhs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652424 -- exec busybox-7dff88458-s5bhs -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.35s)
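The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above extracts the host IP from busybox-style nslookup output: `awk 'NR==5'` keeps only the fifth line of output, and `cut -d' ' -f3` takes that line's third space-separated field. A sketch against fabricated sample output (the sample text below is illustrative, not captured from this run):

```shell
# Five lines of fabricated nslookup-style output; line 5 carries the
# resolved address as its third space-separated field.
sample='Server:    10.96.0.10
Address 1: 10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.39.1'

# awk selects line 5, cut pulls the IP out of it.
printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3
```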

TestMultiControlPlane/serial/AddWorkerNode (72.11s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-652424 -v=7 --alsologtostderr
E0918 20:02:28.681610   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:02:28.688023   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:02:28.699461   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:02:28.721021   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:02:28.762489   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:02:28.843999   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:02:29.005571   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:02:29.327346   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:02:29.969504   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:02:31.251478   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:02:33.813117   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:02:38.934661   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:02:46.914687   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:02:49.177037   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:03:09.659150   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:03:14.619083   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-652424 -v=7 --alsologtostderr: (1m11.130939744s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (72.11s)

TestMultiControlPlane/serial/NodeLabels (0.09s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-652424 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.09s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)

TestMultiControlPlane/serial/CopyFile (14.23s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 cp testdata/cp-test.txt ha-652424:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 cp ha-652424:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1701722116/001/cp-test_ha-652424.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 cp ha-652424:/home/docker/cp-test.txt ha-652424-m02:/home/docker/cp-test_ha-652424_ha-652424-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m02 "sudo cat /home/docker/cp-test_ha-652424_ha-652424-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 cp ha-652424:/home/docker/cp-test.txt ha-652424-m03:/home/docker/cp-test_ha-652424_ha-652424-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m03 "sudo cat /home/docker/cp-test_ha-652424_ha-652424-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 cp ha-652424:/home/docker/cp-test.txt ha-652424-m04:/home/docker/cp-test_ha-652424_ha-652424-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m04 "sudo cat /home/docker/cp-test_ha-652424_ha-652424-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 cp testdata/cp-test.txt ha-652424-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 cp ha-652424-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1701722116/001/cp-test_ha-652424-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 cp ha-652424-m02:/home/docker/cp-test.txt ha-652424:/home/docker/cp-test_ha-652424-m02_ha-652424.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424 "sudo cat /home/docker/cp-test_ha-652424-m02_ha-652424.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 cp ha-652424-m02:/home/docker/cp-test.txt ha-652424-m03:/home/docker/cp-test_ha-652424-m02_ha-652424-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m03 "sudo cat /home/docker/cp-test_ha-652424-m02_ha-652424-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 cp ha-652424-m02:/home/docker/cp-test.txt ha-652424-m04:/home/docker/cp-test_ha-652424-m02_ha-652424-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m04 "sudo cat /home/docker/cp-test_ha-652424-m02_ha-652424-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 cp testdata/cp-test.txt ha-652424-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 cp ha-652424-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1701722116/001/cp-test_ha-652424-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 cp ha-652424-m03:/home/docker/cp-test.txt ha-652424:/home/docker/cp-test_ha-652424-m03_ha-652424.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424 "sudo cat /home/docker/cp-test_ha-652424-m03_ha-652424.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 cp ha-652424-m03:/home/docker/cp-test.txt ha-652424-m02:/home/docker/cp-test_ha-652424-m03_ha-652424-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m02 "sudo cat /home/docker/cp-test_ha-652424-m03_ha-652424-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 cp ha-652424-m03:/home/docker/cp-test.txt ha-652424-m04:/home/docker/cp-test_ha-652424-m03_ha-652424-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m04 "sudo cat /home/docker/cp-test_ha-652424-m03_ha-652424-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 cp testdata/cp-test.txt ha-652424-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 cp ha-652424-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1701722116/001/cp-test_ha-652424-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 cp ha-652424-m04:/home/docker/cp-test.txt ha-652424:/home/docker/cp-test_ha-652424-m04_ha-652424.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424 "sudo cat /home/docker/cp-test_ha-652424-m04_ha-652424.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 cp ha-652424-m04:/home/docker/cp-test.txt ha-652424-m02:/home/docker/cp-test_ha-652424-m04_ha-652424-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m02 "sudo cat /home/docker/cp-test_ha-652424-m04_ha-652424-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 cp ha-652424-m04:/home/docker/cp-test.txt ha-652424-m03:/home/docker/cp-test_ha-652424-m04_ha-652424-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 ssh -n ha-652424-m03 "sudo cat /home/docker/cp-test_ha-652424-m04_ha-652424-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.23s)
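Every `cp`/`ssh` pair above follows the same round-trip pattern: push a file to a node, read it back with `cat`, compare. A minimal local sketch of that pattern (plain `cp` and `cat` in a temp dir stand in for the `minikube cp` and `minikube ssh` commands; the file content here is hypothetical, not the real `testdata/cp-test.txt`):

```shell
# Local stand-in for the minikube cp round trip the test exercises.
src=$(mktemp)
dst_dir=$(mktemp -d)
printf 'hello from cp-test\n' > "$src"
cp "$src" "$dst_dir/cp-test.txt"         # ~ minikube cp testdata/cp-test.txt node:/home/docker/cp-test.txt
readback=$(cat "$dst_dir/cp-test.txt")   # ~ minikube ssh -n node "sudo cat /home/docker/cp-test.txt"
[ "$readback" = "hello from cp-test" ] && echo "round trip ok"
rm -rf "$src" "$dst_dir"
```

The test repeats this round trip for every (source node, destination node) pair, which is why the section runs to dozens of near-identical lines.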

TestMultiControlPlane/serial/StopSecondaryNode (14.01s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 node stop m02 -v=7 --alsologtostderr
E0918 20:03:50.621099   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-652424 node stop m02 -v=7 --alsologtostderr: (13.318398111s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-652424 status -v=7 --alsologtostderr: exit status 7 (686.921525ms)

-- stdout --
	ha-652424
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-652424-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-652424-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-652424-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0918 20:03:59.807807   30449 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:03:59.808121   30449 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:03:59.808132   30449 out.go:358] Setting ErrFile to fd 2...
	I0918 20:03:59.808138   30449 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:03:59.808360   30449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7655/.minikube/bin
	I0918 20:03:59.808556   30449 out.go:352] Setting JSON to false
	I0918 20:03:59.808591   30449 mustload.go:65] Loading cluster: ha-652424
	I0918 20:03:59.808714   30449 notify.go:220] Checking for updates...
	I0918 20:03:59.809183   30449 config.go:182] Loaded profile config "ha-652424": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 20:03:59.809213   30449 status.go:174] checking status of ha-652424 ...
	I0918 20:03:59.809683   30449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:03:59.809761   30449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:03:59.829442   30449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42239
	I0918 20:03:59.829970   30449 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:03:59.830604   30449 main.go:141] libmachine: Using API Version  1
	I0918 20:03:59.830631   30449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:03:59.831071   30449 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:03:59.831375   30449 main.go:141] libmachine: (ha-652424) Calling .GetState
	I0918 20:03:59.833669   30449 status.go:364] ha-652424 host status = "Running" (err=<nil>)
	I0918 20:03:59.833689   30449 host.go:66] Checking if "ha-652424" exists ...
	I0918 20:03:59.834130   30449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:03:59.834182   30449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:03:59.851023   30449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45213
	I0918 20:03:59.851501   30449 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:03:59.852093   30449 main.go:141] libmachine: Using API Version  1
	I0918 20:03:59.852120   30449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:03:59.852464   30449 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:03:59.852669   30449 main.go:141] libmachine: (ha-652424) Calling .GetIP
	I0918 20:03:59.856613   30449 main.go:141] libmachine: (ha-652424) DBG | domain ha-652424 has defined MAC address 52:54:00:df:61:89 in network mk-ha-652424
	I0918 20:03:59.857143   30449 main.go:141] libmachine: (ha-652424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:61:89", ip: ""} in network mk-ha-652424: {Iface:virbr1 ExpiryTime:2024-09-18 20:58:37 +0000 UTC Type:0 Mac:52:54:00:df:61:89 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-652424 Clientid:01:52:54:00:df:61:89}
	I0918 20:03:59.857177   30449 main.go:141] libmachine: (ha-652424) DBG | domain ha-652424 has defined IP address 192.168.39.187 and MAC address 52:54:00:df:61:89 in network mk-ha-652424
	I0918 20:03:59.857406   30449 host.go:66] Checking if "ha-652424" exists ...
	I0918 20:03:59.857828   30449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:03:59.857877   30449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:03:59.874176   30449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34709
	I0918 20:03:59.874716   30449 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:03:59.875410   30449 main.go:141] libmachine: Using API Version  1
	I0918 20:03:59.875433   30449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:03:59.875769   30449 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:03:59.875948   30449 main.go:141] libmachine: (ha-652424) Calling .DriverName
	I0918 20:03:59.876136   30449 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 20:03:59.876158   30449 main.go:141] libmachine: (ha-652424) Calling .GetSSHHostname
	I0918 20:03:59.879212   30449 main.go:141] libmachine: (ha-652424) DBG | domain ha-652424 has defined MAC address 52:54:00:df:61:89 in network mk-ha-652424
	I0918 20:03:59.879667   30449 main.go:141] libmachine: (ha-652424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:61:89", ip: ""} in network mk-ha-652424: {Iface:virbr1 ExpiryTime:2024-09-18 20:58:37 +0000 UTC Type:0 Mac:52:54:00:df:61:89 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-652424 Clientid:01:52:54:00:df:61:89}
	I0918 20:03:59.879696   30449 main.go:141] libmachine: (ha-652424) DBG | domain ha-652424 has defined IP address 192.168.39.187 and MAC address 52:54:00:df:61:89 in network mk-ha-652424
	I0918 20:03:59.879833   30449 main.go:141] libmachine: (ha-652424) Calling .GetSSHPort
	I0918 20:03:59.880004   30449 main.go:141] libmachine: (ha-652424) Calling .GetSSHKeyPath
	I0918 20:03:59.880202   30449 main.go:141] libmachine: (ha-652424) Calling .GetSSHUsername
	I0918 20:03:59.880351   30449 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/ha-652424/id_rsa Username:docker}
	I0918 20:03:59.965909   30449 ssh_runner.go:195] Run: systemctl --version
	I0918 20:03:59.972893   30449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:03:59.989954   30449 kubeconfig.go:125] found "ha-652424" server: "https://192.168.39.254:8443"
	I0918 20:03:59.989993   30449 api_server.go:166] Checking apiserver status ...
	I0918 20:03:59.990036   30449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:04:00.007863   30449 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1910/cgroup
	W0918 20:04:00.018838   30449 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1910/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0918 20:04:00.018896   30449 ssh_runner.go:195] Run: ls
	I0918 20:04:00.029741   30449 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0918 20:04:00.036260   30449 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0918 20:04:00.036290   30449 status.go:456] ha-652424 apiserver status = Running (err=<nil>)
	I0918 20:04:00.036301   30449 status.go:176] ha-652424 status: &{Name:ha-652424 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:04:00.036327   30449 status.go:174] checking status of ha-652424-m02 ...
	I0918 20:04:00.036679   30449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:04:00.036730   30449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:04:00.054476   30449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39261
	I0918 20:04:00.054992   30449 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:04:00.055534   30449 main.go:141] libmachine: Using API Version  1
	I0918 20:04:00.055561   30449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:04:00.055902   30449 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:04:00.056145   30449 main.go:141] libmachine: (ha-652424-m02) Calling .GetState
	I0918 20:04:00.057751   30449 status.go:364] ha-652424-m02 host status = "Stopped" (err=<nil>)
	I0918 20:04:00.057767   30449 status.go:377] host is not running, skipping remaining checks
	I0918 20:04:00.057773   30449 status.go:176] ha-652424-m02 status: &{Name:ha-652424-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:04:00.057794   30449 status.go:174] checking status of ha-652424-m03 ...
	I0918 20:04:00.058078   30449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:04:00.058114   30449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:04:00.073798   30449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39799
	I0918 20:04:00.074242   30449 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:04:00.074771   30449 main.go:141] libmachine: Using API Version  1
	I0918 20:04:00.074798   30449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:04:00.075222   30449 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:04:00.075493   30449 main.go:141] libmachine: (ha-652424-m03) Calling .GetState
	I0918 20:04:00.077512   30449 status.go:364] ha-652424-m03 host status = "Running" (err=<nil>)
	I0918 20:04:00.077534   30449 host.go:66] Checking if "ha-652424-m03" exists ...
	I0918 20:04:00.077858   30449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:04:00.077914   30449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:04:00.093384   30449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43433
	I0918 20:04:00.093881   30449 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:04:00.094393   30449 main.go:141] libmachine: Using API Version  1
	I0918 20:04:00.094415   30449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:04:00.094786   30449 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:04:00.094993   30449 main.go:141] libmachine: (ha-652424-m03) Calling .GetIP
	I0918 20:04:00.098080   30449 main.go:141] libmachine: (ha-652424-m03) DBG | domain ha-652424-m03 has defined MAC address 52:54:00:70:e0:1d in network mk-ha-652424
	I0918 20:04:00.098580   30449 main.go:141] libmachine: (ha-652424-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e0:1d", ip: ""} in network mk-ha-652424: {Iface:virbr1 ExpiryTime:2024-09-18 21:00:56 +0000 UTC Type:0 Mac:52:54:00:70:e0:1d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-652424-m03 Clientid:01:52:54:00:70:e0:1d}
	I0918 20:04:00.098608   30449 main.go:141] libmachine: (ha-652424-m03) DBG | domain ha-652424-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:70:e0:1d in network mk-ha-652424
	I0918 20:04:00.098743   30449 host.go:66] Checking if "ha-652424-m03" exists ...
	I0918 20:04:00.099079   30449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:04:00.099143   30449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:04:00.114531   30449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35713
	I0918 20:04:00.115050   30449 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:04:00.115690   30449 main.go:141] libmachine: Using API Version  1
	I0918 20:04:00.115718   30449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:04:00.116111   30449 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:04:00.116290   30449 main.go:141] libmachine: (ha-652424-m03) Calling .DriverName
	I0918 20:04:00.116452   30449 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 20:04:00.116473   30449 main.go:141] libmachine: (ha-652424-m03) Calling .GetSSHHostname
	I0918 20:04:00.119124   30449 main.go:141] libmachine: (ha-652424-m03) DBG | domain ha-652424-m03 has defined MAC address 52:54:00:70:e0:1d in network mk-ha-652424
	I0918 20:04:00.119529   30449 main.go:141] libmachine: (ha-652424-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e0:1d", ip: ""} in network mk-ha-652424: {Iface:virbr1 ExpiryTime:2024-09-18 21:00:56 +0000 UTC Type:0 Mac:52:54:00:70:e0:1d Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ha-652424-m03 Clientid:01:52:54:00:70:e0:1d}
	I0918 20:04:00.119555   30449 main.go:141] libmachine: (ha-652424-m03) DBG | domain ha-652424-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:70:e0:1d in network mk-ha-652424
	I0918 20:04:00.119649   30449 main.go:141] libmachine: (ha-652424-m03) Calling .GetSSHPort
	I0918 20:04:00.119799   30449 main.go:141] libmachine: (ha-652424-m03) Calling .GetSSHKeyPath
	I0918 20:04:00.119917   30449 main.go:141] libmachine: (ha-652424-m03) Calling .GetSSHUsername
	I0918 20:04:00.120008   30449 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/ha-652424-m03/id_rsa Username:docker}
	I0918 20:04:00.209464   30449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:04:00.227211   30449 kubeconfig.go:125] found "ha-652424" server: "https://192.168.39.254:8443"
	I0918 20:04:00.227242   30449 api_server.go:166] Checking apiserver status ...
	I0918 20:04:00.227283   30449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:04:00.243155   30449 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1857/cgroup
	W0918 20:04:00.262451   30449 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1857/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0918 20:04:00.262550   30449 ssh_runner.go:195] Run: ls
	I0918 20:04:00.268668   30449 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0918 20:04:00.273227   30449 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0918 20:04:00.273251   30449 status.go:456] ha-652424-m03 apiserver status = Running (err=<nil>)
	I0918 20:04:00.273259   30449 status.go:176] ha-652424-m03 status: &{Name:ha-652424-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:04:00.273274   30449 status.go:174] checking status of ha-652424-m04 ...
	I0918 20:04:00.273574   30449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:04:00.273613   30449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:04:00.289707   30449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43383
	I0918 20:04:00.290161   30449 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:04:00.290651   30449 main.go:141] libmachine: Using API Version  1
	I0918 20:04:00.290680   30449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:04:00.291046   30449 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:04:00.291274   30449 main.go:141] libmachine: (ha-652424-m04) Calling .GetState
	I0918 20:04:00.292854   30449 status.go:364] ha-652424-m04 host status = "Running" (err=<nil>)
	I0918 20:04:00.292870   30449 host.go:66] Checking if "ha-652424-m04" exists ...
	I0918 20:04:00.293203   30449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:04:00.293265   30449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:04:00.310348   30449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34781
	I0918 20:04:00.310861   30449 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:04:00.311407   30449 main.go:141] libmachine: Using API Version  1
	I0918 20:04:00.311439   30449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:04:00.311841   30449 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:04:00.312015   30449 main.go:141] libmachine: (ha-652424-m04) Calling .GetIP
	I0918 20:04:00.315671   30449 main.go:141] libmachine: (ha-652424-m04) DBG | domain ha-652424-m04 has defined MAC address 52:54:00:9a:31:8d in network mk-ha-652424
	I0918 20:04:00.316111   30449 main.go:141] libmachine: (ha-652424-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:31:8d", ip: ""} in network mk-ha-652424: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:37 +0000 UTC Type:0 Mac:52:54:00:9a:31:8d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ha-652424-m04 Clientid:01:52:54:00:9a:31:8d}
	I0918 20:04:00.316137   30449 main.go:141] libmachine: (ha-652424-m04) DBG | domain ha-652424-m04 has defined IP address 192.168.39.242 and MAC address 52:54:00:9a:31:8d in network mk-ha-652424
	I0918 20:04:00.316314   30449 host.go:66] Checking if "ha-652424-m04" exists ...
	I0918 20:04:00.316620   30449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:04:00.316673   30449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:04:00.332452   30449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35523
	I0918 20:04:00.332983   30449 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:04:00.333504   30449 main.go:141] libmachine: Using API Version  1
	I0918 20:04:00.333528   30449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:04:00.333892   30449 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:04:00.334097   30449 main.go:141] libmachine: (ha-652424-m04) Calling .DriverName
	I0918 20:04:00.334301   30449 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 20:04:00.334325   30449 main.go:141] libmachine: (ha-652424-m04) Calling .GetSSHHostname
	I0918 20:04:00.338151   30449 main.go:141] libmachine: (ha-652424-m04) DBG | domain ha-652424-m04 has defined MAC address 52:54:00:9a:31:8d in network mk-ha-652424
	I0918 20:04:00.338720   30449 main.go:141] libmachine: (ha-652424-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:31:8d", ip: ""} in network mk-ha-652424: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:37 +0000 UTC Type:0 Mac:52:54:00:9a:31:8d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ha-652424-m04 Clientid:01:52:54:00:9a:31:8d}
	I0918 20:04:00.338767   30449 main.go:141] libmachine: (ha-652424-m04) DBG | domain ha-652424-m04 has defined IP address 192.168.39.242 and MAC address 52:54:00:9a:31:8d in network mk-ha-652424
	I0918 20:04:00.338977   30449 main.go:141] libmachine: (ha-652424-m04) Calling .GetSSHPort
	I0918 20:04:00.339190   30449 main.go:141] libmachine: (ha-652424-m04) Calling .GetSSHKeyPath
	I0918 20:04:00.339334   30449 main.go:141] libmachine: (ha-652424-m04) Calling .GetSSHUsername
	I0918 20:04:00.339492   30449 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/ha-652424-m04/id_rsa Username:docker}
	I0918 20:04:00.425000   30449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:04:00.442847   30449 status.go:176] ha-652424-m04 status: &{Name:ha-652424-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.01s)
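The `exit status 7` above is `minikube status` signalling that at least one node is not running. A hedged sketch of scanning the stdout shown above for stopped hosts (the variable holds a condensed copy of the log's output; the counting logic is illustrative, not minikube's own):

```shell
# Count "host: Stopped" lines in a condensed copy of the status output above.
status='ha-652424
host: Running
ha-652424-m02
host: Stopped
ha-652424-m03
host: Running
ha-652424-m04
host: Running'
stopped=$(printf '%s\n' "$status" | grep -c '^host: Stopped$')
echo "stopped hosts: $stopped"   # prints: stopped hosts: 1
```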

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

TestMultiControlPlane/serial/RestartSecondaryNode (49.42s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-652424 node start m02 -v=7 --alsologtostderr: (48.445955815s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (49.42s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (202.21s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-652424 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-652424 -v=7 --alsologtostderr
E0918 20:05:12.543139   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-652424 -v=7 --alsologtostderr: (42.44255163s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-652424 --wait=true -v=7 --alsologtostderr
E0918 20:07:28.681893   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:07:46.915679   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:07:56.385076   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-652424 --wait=true -v=7 --alsologtostderr: (2m39.660080475s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-652424
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (202.21s)
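RestartClusterKeepsNodes compares the `node list` output captured before the stop with the output after the restart; the pass condition is that the lists agree. Reduced to a sketch (node names taken from the log; the string comparison is illustrative, not the test's actual assertion code):

```shell
# A successful stop + restart must reproduce the same node list.
before='ha-652424
ha-652424-m02
ha-652424-m03
ha-652424-m04'
after="$before"   # simulates the post-restart list matching the pre-stop list
if [ "$before" = "$after" ]; then
  echo "node list preserved"
fi
```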

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (8.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-652424 node delete m03 -v=7 --alsologtostderr: (7.24455239s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.04s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-652424 stop -v=7 --alsologtostderr: (38.258104223s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-652424 status -v=7 --alsologtostderr: exit status 7 (105.797544ms)
-- stdout --
	ha-652424
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-652424-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-652424-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 20:09:00.684869   32834 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:09:00.685165   32834 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:09:00.685175   32834 out.go:358] Setting ErrFile to fd 2...
	I0918 20:09:00.685180   32834 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:09:00.685376   32834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7655/.minikube/bin
	I0918 20:09:00.685568   32834 out.go:352] Setting JSON to false
	I0918 20:09:00.685602   32834 mustload.go:65] Loading cluster: ha-652424
	I0918 20:09:00.685656   32834 notify.go:220] Checking for updates...
	I0918 20:09:00.686224   32834 config.go:182] Loaded profile config "ha-652424": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 20:09:00.686251   32834 status.go:174] checking status of ha-652424 ...
	I0918 20:09:00.686755   32834 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:09:00.686817   32834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:09:00.702049   32834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38567
	I0918 20:09:00.702612   32834 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:09:00.703266   32834 main.go:141] libmachine: Using API Version  1
	I0918 20:09:00.703289   32834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:09:00.703702   32834 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:09:00.703917   32834 main.go:141] libmachine: (ha-652424) Calling .GetState
	I0918 20:09:00.705706   32834 status.go:364] ha-652424 host status = "Stopped" (err=<nil>)
	I0918 20:09:00.705724   32834 status.go:377] host is not running, skipping remaining checks
	I0918 20:09:00.705731   32834 status.go:176] ha-652424 status: &{Name:ha-652424 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:09:00.705768   32834 status.go:174] checking status of ha-652424-m02 ...
	I0918 20:09:00.706080   32834 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:09:00.706123   32834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:09:00.722258   32834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I0918 20:09:00.722728   32834 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:09:00.723393   32834 main.go:141] libmachine: Using API Version  1
	I0918 20:09:00.723416   32834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:09:00.723836   32834 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:09:00.724061   32834 main.go:141] libmachine: (ha-652424-m02) Calling .GetState
	I0918 20:09:00.726013   32834 status.go:364] ha-652424-m02 host status = "Stopped" (err=<nil>)
	I0918 20:09:00.726028   32834 status.go:377] host is not running, skipping remaining checks
	I0918 20:09:00.726033   32834 status.go:176] ha-652424-m02 status: &{Name:ha-652424-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:09:00.726056   32834 status.go:174] checking status of ha-652424-m04 ...
	I0918 20:09:00.726339   32834 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:09:00.726377   32834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:09:00.741525   32834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37927
	I0918 20:09:00.741975   32834 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:09:00.742589   32834 main.go:141] libmachine: Using API Version  1
	I0918 20:09:00.742615   32834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:09:00.743022   32834 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:09:00.743248   32834 main.go:141] libmachine: (ha-652424-m04) Calling .GetState
	I0918 20:09:00.744979   32834 status.go:364] ha-652424-m04 host status = "Stopped" (err=<nil>)
	I0918 20:09:00.744997   32834 status.go:377] host is not running, skipping remaining checks
	I0918 20:09:00.745004   32834 status.go:176] ha-652424-m04 status: &{Name:ha-652424-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (38.36s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-652424 --wait=true -v=7 --alsologtostderr --driver=kvm2 
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-652424 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (1m59.031689178s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (119.86s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-652424 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-652424 --control-plane -v=7 --alsologtostderr: (1m23.310170422s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-652424 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.20s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-832791 --driver=kvm2 
E0918 20:12:46.918720   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-832791 --driver=kvm2 : (53.2891561s)
--- PASS: TestImageBuild/serial/Setup (53.29s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-832791
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-832791: (2.45346153s)
--- PASS: TestImageBuild/serial/NormalBuild (2.45s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-832791
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-832791: (1.454980255s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.46s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-832791
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.00s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-832791
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.91s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-935341 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0918 20:14:09.980669   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-935341 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m32.532447409s)
--- PASS: TestJSONOutput/start/Command (92.53s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-935341 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-935341 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-935341 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-935341 --output=json --user=testUser: (12.657752s)
--- PASS: TestJSONOutput/stop/Command (12.66s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-686706 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-686706 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (66.334162ms)
-- stdout --
	{"specversion":"1.0","id":"53726419-4dbe-4e84-a624-0eca94b1db96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-686706] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5d2f2cc5-1e3b-4fc7-8337-7804826168d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19667"}}
	{"specversion":"1.0","id":"9fc37f01-6308-4646-a672-ae0587cb1468","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ed300dd0-afb2-488f-8ec9-67fb28a8d1f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19667-7655/kubeconfig"}}
	{"specversion":"1.0","id":"cc5d8827-0b3b-44be-b512-06b80e5dff86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7655/.minikube"}}
	{"specversion":"1.0","id":"d9210195-afcd-49a6-8092-f85de15934a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b50a8467-fc2c-4eb7-bfd3-87b527d28c78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"28cac2de-2ce6-4856-85e0-369e50f60b40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-686706" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-686706
--- PASS: TestErrorJSONOutput (0.21s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-517654 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-517654 --driver=kvm2 : (51.497339489s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-575966 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-575966 --driver=kvm2 : (58.146466581s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-517654
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-575966
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-575966" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-575966
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-575966: (1.012988842s)
helpers_test.go:175: Cleaning up "first-517654" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-517654
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-517654: (1.030802605s)
--- PASS: TestMinikubeProfile (112.85s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-105620 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0918 20:17:28.682320   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-105620 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (31.154944999s)
--- PASS: TestMountStart/serial/StartWithMountFirst (32.16s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-105620 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-105620 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.62s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-125800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0918 20:17:46.918231   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-125800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (31.842948911s)
--- PASS: TestMountStart/serial/StartWithMountSecond (32.84s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-125800 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-125800 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-105620 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-125800 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-125800 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-125800
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-125800: (2.282289056s)
--- PASS: TestMountStart/serial/Stop (2.28s)

TestMountStart/serial/RestartStopped (25.7s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-125800
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-125800: (24.701545906s)
--- PASS: TestMountStart/serial/RestartStopped (25.70s)

TestMountStart/serial/VerifyMountPostStop (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-125800 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-125800 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (137.73s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-391998 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0918 20:18:51.746433   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-391998 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m17.30590731s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (137.73s)

TestMultiNode/serial/DeployApp2Nodes (4.18s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391998 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391998 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-391998 -- rollout status deployment/busybox: (2.487364887s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391998 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391998 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391998 -- exec busybox-7dff88458-fk5nl -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391998 -- exec busybox-7dff88458-jrsqn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391998 -- exec busybox-7dff88458-fk5nl -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391998 -- exec busybox-7dff88458-jrsqn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391998 -- exec busybox-7dff88458-fk5nl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391998 -- exec busybox-7dff88458-jrsqn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.18s)

TestMultiNode/serial/PingHostFrom2Pods (0.84s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391998 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391998 -- exec busybox-7dff88458-fk5nl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391998 -- exec busybox-7dff88458-fk5nl -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391998 -- exec busybox-7dff88458-jrsqn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-391998 -- exec busybox-7dff88458-jrsqn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)
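The host-IP lookup in the test above relies on the layout of BusyBox `nslookup` output: line 5 is the answer record, and its third space-separated field is the IP. A minimal offline sketch of that `awk`/`cut` pipeline, fed a fabricated sample of BusyBox-style output (the sample text is illustrative, not captured from this run, though it reuses the 192.168.39.1 gateway address the test pinged):

```shell
# Simulated BusyBox-style nslookup output (illustrative sample, not from this run).
# Line 5 is the answer record "Address 1: <ip> <name>"; the third
# space-separated field of that line is the resolved host IP.
printf 'Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.39.1 host.minikube.internal\n' \
  | awk 'NR==5' | cut -d' ' -f3
```

Note the pipeline is format-sensitive: glibc `nslookup` prints `Address: <ip>` (no record number), which would shift the field positions and break the `cut`.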

TestMultiNode/serial/AddNode (58.68s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-391998 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-391998 -v 3 --alsologtostderr: (58.070464454s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.68s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-391998 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.6s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

TestMultiNode/serial/CopyFile (7.7s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 cp testdata/cp-test.txt multinode-391998:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 ssh -n multinode-391998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 cp multinode-391998:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1838697285/001/cp-test_multinode-391998.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 ssh -n multinode-391998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 cp multinode-391998:/home/docker/cp-test.txt multinode-391998-m02:/home/docker/cp-test_multinode-391998_multinode-391998-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 ssh -n multinode-391998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 ssh -n multinode-391998-m02 "sudo cat /home/docker/cp-test_multinode-391998_multinode-391998-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 cp multinode-391998:/home/docker/cp-test.txt multinode-391998-m03:/home/docker/cp-test_multinode-391998_multinode-391998-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 ssh -n multinode-391998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 ssh -n multinode-391998-m03 "sudo cat /home/docker/cp-test_multinode-391998_multinode-391998-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 cp testdata/cp-test.txt multinode-391998-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 ssh -n multinode-391998-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 cp multinode-391998-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1838697285/001/cp-test_multinode-391998-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 ssh -n multinode-391998-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 cp multinode-391998-m02:/home/docker/cp-test.txt multinode-391998:/home/docker/cp-test_multinode-391998-m02_multinode-391998.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 ssh -n multinode-391998-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 ssh -n multinode-391998 "sudo cat /home/docker/cp-test_multinode-391998-m02_multinode-391998.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 cp multinode-391998-m02:/home/docker/cp-test.txt multinode-391998-m03:/home/docker/cp-test_multinode-391998-m02_multinode-391998-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 ssh -n multinode-391998-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 ssh -n multinode-391998-m03 "sudo cat /home/docker/cp-test_multinode-391998-m02_multinode-391998-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 cp testdata/cp-test.txt multinode-391998-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 ssh -n multinode-391998-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 cp multinode-391998-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1838697285/001/cp-test_multinode-391998-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 ssh -n multinode-391998-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 cp multinode-391998-m03:/home/docker/cp-test.txt multinode-391998:/home/docker/cp-test_multinode-391998-m03_multinode-391998.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 ssh -n multinode-391998-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 ssh -n multinode-391998 "sudo cat /home/docker/cp-test_multinode-391998-m03_multinode-391998.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 cp multinode-391998-m03:/home/docker/cp-test.txt multinode-391998-m02:/home/docker/cp-test_multinode-391998-m03_multinode-391998-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 ssh -n multinode-391998-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 ssh -n multinode-391998-m02 "sudo cat /home/docker/cp-test_multinode-391998-m03_multinode-391998-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.70s)

TestMultiNode/serial/StopNode (3.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-391998 node stop m03: (2.517599185s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-391998 status: exit status 7 (451.352648ms)

-- stdout --
	multinode-391998
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-391998-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-391998-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-391998 status --alsologtostderr: exit status 7 (448.374761ms)

-- stdout --
	multinode-391998
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-391998-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-391998-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0918 20:22:19.632780   41832 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:22:19.632878   41832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:22:19.632885   41832 out.go:358] Setting ErrFile to fd 2...
	I0918 20:22:19.632890   41832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:22:19.633128   41832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7655/.minikube/bin
	I0918 20:22:19.633288   41832 out.go:352] Setting JSON to false
	I0918 20:22:19.633312   41832 mustload.go:65] Loading cluster: multinode-391998
	I0918 20:22:19.633359   41832 notify.go:220] Checking for updates...
	I0918 20:22:19.633738   41832 config.go:182] Loaded profile config "multinode-391998": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 20:22:19.633756   41832 status.go:174] checking status of multinode-391998 ...
	I0918 20:22:19.634162   41832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:22:19.634215   41832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:22:19.650318   41832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35141
	I0918 20:22:19.650928   41832 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:22:19.651608   41832 main.go:141] libmachine: Using API Version  1
	I0918 20:22:19.651636   41832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:22:19.652021   41832 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:22:19.652250   41832 main.go:141] libmachine: (multinode-391998) Calling .GetState
	I0918 20:22:19.653944   41832 status.go:364] multinode-391998 host status = "Running" (err=<nil>)
	I0918 20:22:19.653968   41832 host.go:66] Checking if "multinode-391998" exists ...
	I0918 20:22:19.654283   41832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:22:19.654326   41832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:22:19.670046   41832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42067
	I0918 20:22:19.670584   41832 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:22:19.671182   41832 main.go:141] libmachine: Using API Version  1
	I0918 20:22:19.671205   41832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:22:19.671542   41832 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:22:19.671754   41832 main.go:141] libmachine: (multinode-391998) Calling .GetIP
	I0918 20:22:19.674795   41832 main.go:141] libmachine: (multinode-391998) DBG | domain multinode-391998 has defined MAC address 52:54:00:04:a7:b9 in network mk-multinode-391998
	I0918 20:22:19.675261   41832 main.go:141] libmachine: (multinode-391998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a7:b9", ip: ""} in network mk-multinode-391998: {Iface:virbr1 ExpiryTime:2024-09-18 21:19:02 +0000 UTC Type:0 Mac:52:54:00:04:a7:b9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-391998 Clientid:01:52:54:00:04:a7:b9}
	I0918 20:22:19.675295   41832 main.go:141] libmachine: (multinode-391998) DBG | domain multinode-391998 has defined IP address 192.168.39.158 and MAC address 52:54:00:04:a7:b9 in network mk-multinode-391998
	I0918 20:22:19.675431   41832 host.go:66] Checking if "multinode-391998" exists ...
	I0918 20:22:19.675746   41832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:22:19.675787   41832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:22:19.691642   41832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41143
	I0918 20:22:19.692275   41832 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:22:19.692935   41832 main.go:141] libmachine: Using API Version  1
	I0918 20:22:19.692976   41832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:22:19.693315   41832 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:22:19.693604   41832 main.go:141] libmachine: (multinode-391998) Calling .DriverName
	I0918 20:22:19.693827   41832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 20:22:19.693858   41832 main.go:141] libmachine: (multinode-391998) Calling .GetSSHHostname
	I0918 20:22:19.697017   41832 main.go:141] libmachine: (multinode-391998) DBG | domain multinode-391998 has defined MAC address 52:54:00:04:a7:b9 in network mk-multinode-391998
	I0918 20:22:19.697470   41832 main.go:141] libmachine: (multinode-391998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:a7:b9", ip: ""} in network mk-multinode-391998: {Iface:virbr1 ExpiryTime:2024-09-18 21:19:02 +0000 UTC Type:0 Mac:52:54:00:04:a7:b9 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:multinode-391998 Clientid:01:52:54:00:04:a7:b9}
	I0918 20:22:19.697497   41832 main.go:141] libmachine: (multinode-391998) DBG | domain multinode-391998 has defined IP address 192.168.39.158 and MAC address 52:54:00:04:a7:b9 in network mk-multinode-391998
	I0918 20:22:19.697647   41832 main.go:141] libmachine: (multinode-391998) Calling .GetSSHPort
	I0918 20:22:19.697828   41832 main.go:141] libmachine: (multinode-391998) Calling .GetSSHKeyPath
	I0918 20:22:19.697968   41832 main.go:141] libmachine: (multinode-391998) Calling .GetSSHUsername
	I0918 20:22:19.698075   41832 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/multinode-391998/id_rsa Username:docker}
	I0918 20:22:19.786114   41832 ssh_runner.go:195] Run: systemctl --version
	I0918 20:22:19.792989   41832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:22:19.809336   41832 kubeconfig.go:125] found "multinode-391998" server: "https://192.168.39.158:8443"
	I0918 20:22:19.809379   41832 api_server.go:166] Checking apiserver status ...
	I0918 20:22:19.809438   41832 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:22:19.831387   41832 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1888/cgroup
	W0918 20:22:19.842551   41832 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1888/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0918 20:22:19.842629   41832 ssh_runner.go:195] Run: ls
	I0918 20:22:19.847512   41832 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0918 20:22:19.851757   41832 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0918 20:22:19.851783   41832 status.go:456] multinode-391998 apiserver status = Running (err=<nil>)
	I0918 20:22:19.851796   41832 status.go:176] multinode-391998 status: &{Name:multinode-391998 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:22:19.851816   41832 status.go:174] checking status of multinode-391998-m02 ...
	I0918 20:22:19.852153   41832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:22:19.852200   41832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:22:19.867845   41832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42521
	I0918 20:22:19.868346   41832 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:22:19.868856   41832 main.go:141] libmachine: Using API Version  1
	I0918 20:22:19.868872   41832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:22:19.869286   41832 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:22:19.869486   41832 main.go:141] libmachine: (multinode-391998-m02) Calling .GetState
	I0918 20:22:19.871183   41832 status.go:364] multinode-391998-m02 host status = "Running" (err=<nil>)
	I0918 20:22:19.871206   41832 host.go:66] Checking if "multinode-391998-m02" exists ...
	I0918 20:22:19.871576   41832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:22:19.871671   41832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:22:19.887449   41832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40451
	I0918 20:22:19.888098   41832 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:22:19.888643   41832 main.go:141] libmachine: Using API Version  1
	I0918 20:22:19.888666   41832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:22:19.889055   41832 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:22:19.889290   41832 main.go:141] libmachine: (multinode-391998-m02) Calling .GetIP
	I0918 20:22:19.892201   41832 main.go:141] libmachine: (multinode-391998-m02) DBG | domain multinode-391998-m02 has defined MAC address 52:54:00:31:3e:87 in network mk-multinode-391998
	I0918 20:22:19.892899   41832 main.go:141] libmachine: (multinode-391998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:3e:87", ip: ""} in network mk-multinode-391998: {Iface:virbr1 ExpiryTime:2024-09-18 21:20:19 +0000 UTC Type:0 Mac:52:54:00:31:3e:87 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:multinode-391998-m02 Clientid:01:52:54:00:31:3e:87}
	I0918 20:22:19.892935   41832 main.go:141] libmachine: (multinode-391998-m02) DBG | domain multinode-391998-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:31:3e:87 in network mk-multinode-391998
	I0918 20:22:19.893107   41832 host.go:66] Checking if "multinode-391998-m02" exists ...
	I0918 20:22:19.893456   41832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:22:19.893498   41832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:22:19.908797   41832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43721
	I0918 20:22:19.909348   41832 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:22:19.909910   41832 main.go:141] libmachine: Using API Version  1
	I0918 20:22:19.909936   41832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:22:19.910270   41832 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:22:19.910447   41832 main.go:141] libmachine: (multinode-391998-m02) Calling .DriverName
	I0918 20:22:19.910629   41832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 20:22:19.910650   41832 main.go:141] libmachine: (multinode-391998-m02) Calling .GetSSHHostname
	I0918 20:22:19.913642   41832 main.go:141] libmachine: (multinode-391998-m02) DBG | domain multinode-391998-m02 has defined MAC address 52:54:00:31:3e:87 in network mk-multinode-391998
	I0918 20:22:19.914181   41832 main.go:141] libmachine: (multinode-391998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:3e:87", ip: ""} in network mk-multinode-391998: {Iface:virbr1 ExpiryTime:2024-09-18 21:20:19 +0000 UTC Type:0 Mac:52:54:00:31:3e:87 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:multinode-391998-m02 Clientid:01:52:54:00:31:3e:87}
	I0918 20:22:19.914207   41832 main.go:141] libmachine: (multinode-391998-m02) DBG | domain multinode-391998-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:31:3e:87 in network mk-multinode-391998
	I0918 20:22:19.914369   41832 main.go:141] libmachine: (multinode-391998-m02) Calling .GetSSHPort
	I0918 20:22:19.914535   41832 main.go:141] libmachine: (multinode-391998-m02) Calling .GetSSHKeyPath
	I0918 20:22:19.914704   41832 main.go:141] libmachine: (multinode-391998-m02) Calling .GetSSHUsername
	I0918 20:22:19.914814   41832 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7655/.minikube/machines/multinode-391998-m02/id_rsa Username:docker}
	I0918 20:22:20.000597   41832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:22:20.015858   41832 status.go:176] multinode-391998-m02 status: &{Name:multinode-391998-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:22:20.015895   41832 status.go:174] checking status of multinode-391998-m03 ...
	I0918 20:22:20.016272   41832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:22:20.016315   41832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:22:20.033159   41832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38373
	I0918 20:22:20.033738   41832 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:22:20.034264   41832 main.go:141] libmachine: Using API Version  1
	I0918 20:22:20.034284   41832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:22:20.034594   41832 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:22:20.034795   41832 main.go:141] libmachine: (multinode-391998-m03) Calling .GetState
	I0918 20:22:20.036358   41832 status.go:364] multinode-391998-m03 host status = "Stopped" (err=<nil>)
	I0918 20:22:20.036371   41832 status.go:377] host is not running, skipping remaining checks
	I0918 20:22:20.036377   41832 status.go:176] multinode-391998-m03 status: &{Name:multinode-391998-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.42s)
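The stderr trace above includes the disk-usage probe `minikube status` runs over SSH, `sh -c "df -h /var | awk 'NR==2{print $5}'"`, which grabs the use-percentage column from the second line of `df` output. The same extraction can be sketched locally against any mounted path (`/` here stands in for the guest's `/var`):

```shell
# Pull the fifth column (Use%) from the second line of df's output --
# the same extraction the status probe logged above performs for /var
# inside the guest VM.
df -h / | awk 'NR==2{print $5}'
```

The `NR==2` skips df's header row; `$5` is the `Use%` column in the standard six-column `df -h` layout.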

TestMultiNode/serial/StartAfterStop (43.51s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 node start m03 -v=7 --alsologtostderr
E0918 20:22:28.682886   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:22:46.915426   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-391998 node start m03 -v=7 --alsologtostderr: (42.842747535s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (43.51s)

TestMultiNode/serial/RestartKeepsNodes (192.27s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-391998
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-391998
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-391998: (28.100462417s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-391998 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-391998 --wait=true -v=8 --alsologtostderr: (2m44.065056389s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-391998
--- PASS: TestMultiNode/serial/RestartKeepsNodes (192.27s)

TestMultiNode/serial/DeleteNode (2.43s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-391998 node delete m03: (1.879800561s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.43s)

TestMultiNode/serial/StopMultiNode (25.1s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-391998 stop: (24.927856481s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-391998 status: exit status 7 (86.168758ms)

-- stdout --
	multinode-391998
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-391998-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-391998 status --alsologtostderr: exit status 7 (87.443455ms)
-- stdout --
	multinode-391998
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-391998-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0918 20:26:43.303904   43649 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:26:43.304368   43649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:26:43.304386   43649 out.go:358] Setting ErrFile to fd 2...
	I0918 20:26:43.304394   43649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:26:43.304875   43649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7655/.minikube/bin
	I0918 20:26:43.305303   43649 out.go:352] Setting JSON to false
	I0918 20:26:43.305347   43649 mustload.go:65] Loading cluster: multinode-391998
	I0918 20:26:43.305416   43649 notify.go:220] Checking for updates...
	I0918 20:26:43.305995   43649 config.go:182] Loaded profile config "multinode-391998": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 20:26:43.306020   43649 status.go:174] checking status of multinode-391998 ...
	I0918 20:26:43.306458   43649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:26:43.306500   43649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:26:43.321925   43649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40205
	I0918 20:26:43.322452   43649 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:26:43.322999   43649 main.go:141] libmachine: Using API Version  1
	I0918 20:26:43.323014   43649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:26:43.323356   43649 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:26:43.323541   43649 main.go:141] libmachine: (multinode-391998) Calling .GetState
	I0918 20:26:43.325508   43649 status.go:364] multinode-391998 host status = "Stopped" (err=<nil>)
	I0918 20:26:43.325523   43649 status.go:377] host is not running, skipping remaining checks
	I0918 20:26:43.325528   43649 status.go:176] multinode-391998 status: &{Name:multinode-391998 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:26:43.325564   43649 status.go:174] checking status of multinode-391998-m02 ...
	I0918 20:26:43.325902   43649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0918 20:26:43.325944   43649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:26:43.342050   43649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36203
	I0918 20:26:43.342718   43649 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:26:43.343285   43649 main.go:141] libmachine: Using API Version  1
	I0918 20:26:43.343307   43649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:26:43.343676   43649 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:26:43.343917   43649 main.go:141] libmachine: (multinode-391998-m02) Calling .GetState
	I0918 20:26:43.345872   43649 status.go:364] multinode-391998-m02 host status = "Stopped" (err=<nil>)
	I0918 20:26:43.345892   43649 status.go:377] host is not running, skipping remaining checks
	I0918 20:26:43.345899   43649 status.go:176] multinode-391998-m02 status: &{Name:multinode-391998-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.10s)
TestMultiNode/serial/RestartMultiNode (151.04s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-391998 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0918 20:27:28.682745   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:27:46.915434   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-391998 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (2m30.489383538s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-391998 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (151.04s)
TestMultiNode/serial/ValidateNameConflict (55.03s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-391998
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-391998-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-391998-m02 --driver=kvm2 : exit status 14 (71.810721ms)
-- stdout --
	* [multinode-391998-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7655/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7655/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-391998-m02' is duplicated with machine name 'multinode-391998-m02' in profile 'multinode-391998'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-391998-m03 --driver=kvm2 
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-391998-m03 --driver=kvm2 : (53.707850928s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-391998
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-391998: exit status 80 (222.710482ms)
-- stdout --
	* Adding node m03 to cluster multinode-391998 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-391998-m03 already exists in multinode-391998-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-391998-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (55.03s)
TestPreload (153.14s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-266298 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0918 20:30:49.982792   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-266298 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m26.382608904s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-266298 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-266298 image pull gcr.io/k8s-minikube/busybox: (1.626941324s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-266298
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-266298: (12.6599031s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-266298 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0918 20:32:28.683033   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-266298 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (51.209036579s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-266298 image list
helpers_test.go:175: Cleaning up "test-preload-266298" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-266298
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-266298: (1.064905684s)
--- PASS: TestPreload (153.14s)
TestScheduledStopUnix (122.43s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-820110 --memory=2048 --driver=kvm2 
E0918 20:32:46.916790   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-820110 --memory=2048 --driver=kvm2 : (50.71420665s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-820110 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-820110 -n scheduled-stop-820110
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-820110 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0918 20:33:35.430025   14866 retry.go:31] will retry after 114.568µs: open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/scheduled-stop-820110/pid: no such file or directory
I0918 20:33:35.431228   14866 retry.go:31] will retry after 142.633µs: open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/scheduled-stop-820110/pid: no such file or directory
I0918 20:33:35.432436   14866 retry.go:31] will retry after 315.949µs: open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/scheduled-stop-820110/pid: no such file or directory
I0918 20:33:35.433582   14866 retry.go:31] will retry after 488.331µs: open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/scheduled-stop-820110/pid: no such file or directory
I0918 20:33:35.434676   14866 retry.go:31] will retry after 462.559µs: open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/scheduled-stop-820110/pid: no such file or directory
I0918 20:33:35.435810   14866 retry.go:31] will retry after 750.71µs: open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/scheduled-stop-820110/pid: no such file or directory
I0918 20:33:35.437032   14866 retry.go:31] will retry after 1.338495ms: open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/scheduled-stop-820110/pid: no such file or directory
I0918 20:33:35.439280   14866 retry.go:31] will retry after 1.984662ms: open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/scheduled-stop-820110/pid: no such file or directory
I0918 20:33:35.441589   14866 retry.go:31] will retry after 2.28004ms: open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/scheduled-stop-820110/pid: no such file or directory
I0918 20:33:35.444920   14866 retry.go:31] will retry after 2.062491ms: open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/scheduled-stop-820110/pid: no such file or directory
I0918 20:33:35.447111   14866 retry.go:31] will retry after 3.596989ms: open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/scheduled-stop-820110/pid: no such file or directory
I0918 20:33:35.451389   14866 retry.go:31] will retry after 10.130581ms: open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/scheduled-stop-820110/pid: no such file or directory
I0918 20:33:35.462670   14866 retry.go:31] will retry after 8.966332ms: open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/scheduled-stop-820110/pid: no such file or directory
I0918 20:33:35.471999   14866 retry.go:31] will retry after 22.917159ms: open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/scheduled-stop-820110/pid: no such file or directory
I0918 20:33:35.495299   14866 retry.go:31] will retry after 15.789322ms: open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/scheduled-stop-820110/pid: no such file or directory
I0918 20:33:35.511600   14866 retry.go:31] will retry after 63.360005ms: open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/scheduled-stop-820110/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-820110 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-820110 -n scheduled-stop-820110
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-820110
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-820110 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-820110
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-820110: exit status 7 (66.829559ms)
-- stdout --
	scheduled-stop-820110
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-820110 -n scheduled-stop-820110
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-820110 -n scheduled-stop-820110: exit status 7 (64.998517ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-820110" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-820110
--- PASS: TestScheduledStopUnix (122.43s)
TestSkaffold (133.12s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3643463532 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-138225 --memory=2600 --driver=kvm2 
E0918 20:35:31.749258   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-138225 --memory=2600 --driver=kvm2 : (48.647792722s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3643463532 run --minikube-profile skaffold-138225 --kube-context skaffold-138225 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3643463532 run --minikube-profile skaffold-138225 --kube-context skaffold-138225 --status-check=true --port-forward=false --interactive=false: (1m11.535810165s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5955f89cc9-mlv49" [409959d9-51cc-43b9-b4bf-253932834f90] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004577311s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5647cbfc4f-8hhb4" [b61ef657-fe99-455e-846a-012bc19fe48e] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004364624s
helpers_test.go:175: Cleaning up "skaffold-138225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-138225
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-138225: (1.202363832s)
--- PASS: TestSkaffold (133.12s)
TestRunningBinaryUpgrade (220.07s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.264165933 start -p running-upgrade-351605 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.264165933 start -p running-upgrade-351605 --memory=2200 --vm-driver=kvm2 : (2m40.872114016s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-351605 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-351605 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (57.435711144s)
helpers_test.go:175: Cleaning up "running-upgrade-351605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-351605
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-351605: (1.215515523s)
--- PASS: TestRunningBinaryUpgrade (220.07s)
TestKubernetesUpgrade (218.7s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-924101 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
E0918 20:41:47.670760   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/skaffold-138225/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:41:47.677295   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/skaffold-138225/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:41:47.688820   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/skaffold-138225/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:41:47.710381   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/skaffold-138225/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:41:47.751891   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/skaffold-138225/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:41:47.833455   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/skaffold-138225/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:41:47.995089   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/skaffold-138225/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:41:48.316821   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/skaffold-138225/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:41:48.958961   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/skaffold-138225/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:41:50.240971   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/skaffold-138225/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:41:52.802755   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/skaffold-138225/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:41:57.924238   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/skaffold-138225/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:42:08.165748   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/skaffold-138225/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-924101 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (1m22.173722891s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-924101
E0918 20:43:09.609101   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/skaffold-138225/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-924101: (13.332675043s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-924101 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-924101 status --format={{.Host}}: exit status 7 (74.169091ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-924101 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-924101 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 : (49.650921771s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-924101 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-924101 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-924101 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (114.823545ms)
-- stdout --
	* [kubernetes-upgrade-924101] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7655/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7655/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-924101
	    minikube start -p kubernetes-upgrade-924101 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9241012 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-924101 --kubernetes-version=v1.31.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-924101 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 
E0918 20:44:11.344552   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/gvisor-909102/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:44:11.351039   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/gvisor-909102/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:44:11.362678   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/gvisor-909102/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:44:11.384147   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/gvisor-909102/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:44:11.425674   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/gvisor-909102/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:44:11.507324   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/gvisor-909102/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:44:11.668985   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/gvisor-909102/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:44:11.990717   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/gvisor-909102/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:44:12.632860   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/gvisor-909102/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:44:13.914898   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/gvisor-909102/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-924101 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 : (1m12.043935221s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-924101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-924101
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-924101: (1.218040148s)
--- PASS: TestKubernetesUpgrade (218.70s)

                                                
                                    
TestPause/serial/Start (109.16s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-204274 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-204274 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m49.164269477s)
--- PASS: TestPause/serial/Start (109.16s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-268942 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-268942 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (69.290206ms)
-- stdout --
	* [NoKubernetes-268942] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7655/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7655/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (62.91s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-268942 --driver=kvm2 
E0918 20:42:28.647715   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/skaffold-138225/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:42:28.682166   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:42:46.914701   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-268942 --driver=kvm2 : (1m2.621123664s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-268942 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (62.91s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (49.94s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-204274 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-204274 --alsologtostderr -v=1 --driver=kvm2 : (49.906183073s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (49.94s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (33.28s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-268942 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-268942 --no-kubernetes --driver=kvm2 : (31.729571036s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-268942 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-268942 status -o json: exit status 2 (331.2991ms)
-- stdout --
	{"Name":"NoKubernetes-268942","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-268942
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-268942: (1.217940899s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (33.28s)

                                                
                                    
TestPause/serial/Pause (0.65s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-204274 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.65s)

                                                
                                    
TestPause/serial/VerifyStatus (0.26s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-204274 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-204274 --output=json --layout=cluster: exit status 2 (261.166797ms)
-- stdout --
	{"Name":"pause-204274","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-204274","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)

                                                
                                    
TestPause/serial/Unpause (0.62s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-204274 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.62s)

                                                
                                    
TestPause/serial/PauseAgain (0.81s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-204274 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.81s)

                                                
                                    
TestPause/serial/DeletePaused (1.06s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-204274 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-204274 --alsologtostderr -v=5: (1.059902245s)
--- PASS: TestPause/serial/DeletePaused (1.06s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (14.37s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.367296108s)
--- PASS: TestPause/serial/VerifyDeletedResources (14.37s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.47s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.47s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (185.3s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3184614505 start -p stopped-upgrade-486604 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3184614505 start -p stopped-upgrade-486604 --memory=2200 --vm-driver=kvm2 : (1m13.447398622s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3184614505 -p stopped-upgrade-486604 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3184614505 -p stopped-upgrade-486604 stop: (13.204034188s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-486604 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0918 20:45:33.284433   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/gvisor-909102/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-486604 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m38.648970671s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (185.30s)

                                                
                                    
TestNoKubernetes/serial/Start (51.19s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-268942 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-268942 --no-kubernetes --driver=kvm2 : (51.186521679s)
--- PASS: TestNoKubernetes/serial/Start (51.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (104.81s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-096594 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E0918 20:44:21.599439   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/gvisor-909102/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:44:31.531456   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/skaffold-138225/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:44:31.841422   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/gvisor-909102/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:44:52.322959   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/gvisor-909102/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-096594 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m44.807587836s)
--- PASS: TestNetworkPlugins/group/auto/Start (104.81s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-268942 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-268942 "sudo systemctl is-active --quiet service kubelet": exit status 1 (225.833629ms)
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (19.09s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.133650712s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.954478037s)
--- PASS: TestNoKubernetes/serial/ProfileList (19.09s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.3s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-268942
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-268942: (2.301281515s)
--- PASS: TestNoKubernetes/serial/Stop (2.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (32.64s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-268942 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-268942 --driver=kvm2 : (32.638407415s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (32.64s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (111.4s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-096594 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-096594 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m51.400927736s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (111.40s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-268942 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-268942 "sudo systemctl is-active --quiet service kubelet": exit status 1 (204.474054ms)
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (139.95s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-096594 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-096594 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m19.949669665s)
--- PASS: TestNetworkPlugins/group/calico/Start (139.95s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-096594 "pgrep -a kubelet"
I0918 20:46:01.922764   14866 config.go:182] Loaded profile config "auto-096594": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.23s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-096594 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gkbjp" [3fbabf46-7c24-423b-b95c-5cd50bf23adf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gkbjp" [3fbabf46-7c24-423b-b95c-5cd50bf23adf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004313874s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.23s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-096594 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-096594 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-096594 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (111.7s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-096594 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
E0918 20:46:47.670737   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/skaffold-138225/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:46:55.206866   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/gvisor-909102/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-096594 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m51.694941363s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (111.70s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.34s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-486604
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-486604: (1.33516729s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.34s)

                                                
                                    
TestNetworkPlugins/group/false/Start (129.71s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-096594 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-096594 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (2m9.711418754s)
--- PASS: TestNetworkPlugins/group/false/Start (129.71s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-9m8kp" [357fa4df-387f-45bb-8b95-f114492120ac] Running
E0918 20:47:15.373742   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/skaffold-138225/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005319812s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-096594 "pgrep -a kubelet"
I0918 20:47:19.525134   14866 config.go:182] Loaded profile config "kindnet-096594": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-096594 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h7m7w" [c9ef809d-67a9-4a8c-a8d0-43e11969d736] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-h7m7w" [c9ef809d-67a9-4a8c-a8d0-43e11969d736] Running
E0918 20:47:28.681552   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:47:29.984559   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.00491927s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-096594 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-096594 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-096594 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (201.37s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-096594 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-096594 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (3m21.372488454s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (201.37s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-htds6" [661adedc-2352-4401-9c88-bafa304cdbdf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007025801s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-096594 "pgrep -a kubelet"
I0918 20:48:17.500213   14866 config.go:182] Loaded profile config "calico-096594": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.25s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-096594 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2v67w" [34378204-2890-4262-b2cd-ef71804cb554] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2v67w" [34378204-2890-4262-b2cd-ef71804cb554] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.005103468s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-096594 "pgrep -a kubelet"
I0918 20:48:22.958279   14866 config.go:182] Loaded profile config "custom-flannel-096594": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-096594 replace --force -f testdata/netcat-deployment.yaml
I0918 20:48:23.266911   14866 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ptl76" [d3e501ff-53e0-4212-a1f6-cf3655422b8f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ptl76" [d3e501ff-53e0-4212-a1f6-cf3655422b8f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.00589308s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.33s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-096594 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-096594 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-096594 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-096594 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-096594 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-096594 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (79.44s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-096594 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-096594 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m19.438377835s)
--- PASS: TestNetworkPlugins/group/flannel/Start (79.44s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (128.06s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-096594 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E0918 20:49:11.343182   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/gvisor-909102/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-096594 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (2m8.062038451s)
--- PASS: TestNetworkPlugins/group/bridge/Start (128.06s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-096594 "pgrep -a kubelet"
I0918 20:49:15.195149   14866 config.go:182] Loaded profile config "false-096594": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-096594 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-f885m" [77e96d0a-2215-476f-8968-d2d0b2eb5903] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-f885m" [77e96d0a-2215-476f-8968-d2d0b2eb5903] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.007164843s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-096594 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-096594 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.28s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-096594 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (109.95s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-096594 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-096594 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m49.951319907s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (109.95s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-jvzbw" [2034ae5b-04f2-47a6-aaf1-9c9a3aa90354] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005526248s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-096594 "pgrep -a kubelet"
I0918 20:50:16.266002   14866 config.go:182] Loaded profile config "flannel-096594": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-096594 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sd9x8" [10f0e95b-8def-47fc-8e61-493ef5baa939] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sd9x8" [10f0e95b-8def-47fc-8e61-493ef5baa939] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.006586598s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-096594 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.20s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-096594 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-096594 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (139.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-252613 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-252613 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (2m19.725496868s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (139.73s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-096594 "pgrep -a kubelet"
I0918 20:51:02.065462   14866 config.go:182] Loaded profile config "bridge-096594": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.40s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-096594 replace --force -f testdata/netcat-deployment.yaml
E0918 20:51:02.136178   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/auto-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:51:02.149042   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/auto-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:51:02.163602   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/auto-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:51:02.185069   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/auto-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:51:02.229151   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/auto-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:51:02.310929   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/auto-096594/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5llt8" [bfb2d35f-8fb0-47c4-b8b2-1954dadb7923] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0918 20:51:02.472406   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/auto-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:51:02.793742   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/auto-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:51:03.435718   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/auto-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:51:04.717086   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/auto-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:51:07.279030   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/auto-096594/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-5llt8" [bfb2d35f-8fb0-47c4-b8b2-1954dadb7923] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.013145544s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.40s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-096594 "pgrep -a kubelet"
E0918 20:51:12.400975   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/auto-096594/client.crt: no such file or directory" logger="UnhandledError"
I0918 20:51:12.550363   14866 config.go:182] Loaded profile config "enable-default-cni-096594": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-096594 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-68l7c" [ce0d93ca-e7f2-40cc-a19c-5349c38cd8a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-68l7c" [ce0d93ca-e7f2-40cc-a19c-5349c38cd8a6] Running
E0918 20:51:22.642738   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/auto-096594/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.007121621s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-096594 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-096594 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-096594 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-096594 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-096594 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-096594 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (86.29s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-097693 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-097693 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1: (1m26.287259373s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (86.29s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-096594 "pgrep -a kubelet"
I0918 20:51:33.919496   14866 config.go:182] Loaded profile config "kubenet-096594": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (12.88s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-096594 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gvjt6" [b9950346-5f36-4479-9d61-6875c96e6944] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gvjt6" [b9950346-5f36-4479-9d61-6875c96e6944] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.004784974s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.88s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (123.25s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-784000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-784000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1: (2m3.252938261s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (123.25s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-096594 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-096594 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-096594 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)
E0918 20:59:43.156434   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/false-096594/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (136.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-339588 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1
E0918 20:52:11.750568   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:52:13.308069   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kindnet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:52:13.314643   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kindnet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:52:13.328876   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kindnet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:52:13.350874   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kindnet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:52:13.392496   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kindnet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:52:13.474120   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kindnet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:52:13.635627   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kindnet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:52:13.957948   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kindnet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:52:14.599465   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kindnet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:52:15.880859   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kindnet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:52:18.443043   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kindnet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:52:23.565283   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kindnet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:52:24.086976   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/auto-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:52:28.681854   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:52:33.806775   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kindnet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:52:46.914690   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:52:54.288255   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kindnet-096594/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-339588 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1: (2m16.316132947s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (136.32s)

TestStartStop/group/no-preload/serial/DeployApp (9.42s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-097693 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b5d7aba1-2e3f-4dd1-b301-bfe84f6646fd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b5d7aba1-2e3f-4dd1-b301-bfe84f6646fd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005096195s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-097693 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.42s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-252613 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d18d0b69-c8a6-424f-8ba9-014c3b09ba28] Pending
helpers_test.go:344: "busybox" [d18d0b69-c8a6-424f-8ba9-014c3b09ba28] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d18d0b69-c8a6-424f-8ba9-014c3b09ba28] Running
E0918 20:53:11.911263   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/calico-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:12.552614   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/calico-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:13.834084   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/calico-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:16.396187   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/calico-096594/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.005152963s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-252613 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.63s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-097693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-097693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.084084289s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-097693 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/no-preload/serial/Stop (14.41s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-097693 --alsologtostderr -v=3
E0918 20:53:11.264929   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/calico-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:11.271463   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/calico-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:11.282877   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/calico-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:11.304381   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/calico-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:11.346063   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/calico-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:11.427664   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/calico-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:11.589296   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/calico-096594/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-097693 --alsologtostderr -v=3: (14.405702008s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (14.41s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-252613 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-252613 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.988609004s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-252613 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.10s)

TestStartStop/group/old-k8s-version/serial/Stop (13.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-252613 --alsologtostderr -v=3
E0918 20:53:21.517991   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/calico-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:23.254442   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/custom-flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:23.262118   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/custom-flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:23.273861   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/custom-flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-252613 --alsologtostderr -v=3: (13.45439107s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.45s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-097693 -n no-preload-097693
E0918 20:53:23.295787   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/custom-flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:23.337133   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/custom-flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-097693 -n no-preload-097693: exit status 7 (77.716949ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-097693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0918 20:53:23.418858   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/custom-flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (308.69s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-097693 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1
E0918 20:53:23.580363   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/custom-flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:23.901930   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/custom-flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:24.544199   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/custom-flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:25.826296   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/custom-flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:28.388221   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/custom-flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:31.760344   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/calico-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:33.509793   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/custom-flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-097693 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1: (5m8.389767587s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-097693 -n no-preload-097693
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (308.69s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-252613 -n old-k8s-version-252613
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-252613 -n old-k8s-version-252613: exit status 7 (85.710429ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-252613 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/old-k8s-version/serial/SecondStart (546.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-252613 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
E0918 20:53:35.249809   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kindnet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:43.752045   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/custom-flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:46.008709   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/auto-096594/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-252613 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (9m6.646153739s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-252613 -n old-k8s-version-252613
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (546.92s)

TestStartStop/group/embed-certs/serial/DeployApp (8.37s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-784000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a4623b7c-a4d1-493c-a0a6-aa177a5a9ff0] Pending
helpers_test.go:344: "busybox" [a4623b7c-a4d1-493c-a0a6-aa177a5a9ff0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a4623b7c-a4d1-493c-a0a6-aa177a5a9ff0] Running
E0918 20:53:52.242129   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/calico-096594/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005163636s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-784000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.37s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-784000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-784000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.116395774s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-784000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/embed-certs/serial/Stop (13.4s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-784000 --alsologtostderr -v=3
E0918 20:54:04.234207   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/custom-flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-784000 --alsologtostderr -v=3: (13.400371638s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.40s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-784000 -n embed-certs-784000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-784000 -n embed-certs-784000: exit status 7 (75.749561ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-784000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (304.21s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-784000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1
E0918 20:54:11.343390   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/gvisor-909102/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:15.452379   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/false-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:15.458888   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/false-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:15.470392   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/false-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:15.491889   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/false-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:15.533473   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/false-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:15.614987   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/false-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:15.777018   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/false-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:16.098845   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/false-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:16.740975   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/false-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:18.023135   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/false-096594/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-784000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1: (5m3.934663291s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-784000 -n embed-certs-784000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (304.21s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-339588 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [13428951-c063-4c72-accd-6d68349a78bb] Pending
E0918 20:54:20.584552   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/false-096594/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [13428951-c063-4c72-accd-6d68349a78bb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [13428951-c063-4c72-accd-6d68349a78bb] Running
E0918 20:54:25.706180   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/false-096594/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.007967754s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-339588 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.75s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-339588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-339588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.026247631s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-339588 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-339588 --alsologtostderr -v=3
E0918 20:54:33.204025   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/calico-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:35.948532   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/false-096594/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-339588 --alsologtostderr -v=3: (13.356313267s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-339588 -n default-k8s-diff-port-339588
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-339588 -n default-k8s-diff-port-339588: exit status 7 (73.144381ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-339588 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)
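The `status error: exit status 7 (may be ok)` note above reflects how the test harness treats `minikube status` exit codes: a non-zero code paired with a sensible stdout state (here `Stopped` right after an explicit stop) is accepted. A minimal sketch of that interpretation, with the code-to-state mapping inferred only from the outputs visible in this report (0 alongside running clusters, 7 alongside `Stopped`, 2 alongside paused components later in the Pause tests), not from an exhaustive list:

```shell
# Hypothetical helper: map the `minikube status` exit codes seen in this
# report to the state printed on stdout next to them. The mapping is
# inferred from this log, not an authoritative enumeration.
interpret_status() {
  case "$1" in
    0) echo "Running" ;;
    2) echo "Paused or component stopped (may be ok)" ;;
    7) echo "Host stopped (may be ok)" ;;
    *) echo "Unexpected status: $1" ;;
  esac
}

interpret_status 7
```

This mirrors the harness logic: it only fails the test when the exit code and the printed state disagree with what the preceding step (stop, pause, unpause) should have produced.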

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (316.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-339588 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1
E0918 20:54:45.196560   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/custom-flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:56.429860   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/false-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:57.172190   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kindnet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:55:10.033227   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:55:10.039658   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:55:10.051132   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:55:10.072613   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:55:10.114778   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:55:10.196481   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:55:10.358404   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:55:10.680427   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:55:11.322363   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:55:12.604116   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:55:15.165416   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:55:20.287385   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:55:30.529356   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:55:37.391984   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/false-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:55:51.010693   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:55:55.125623   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/calico-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:02.136267   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/auto-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:02.433019   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/bridge-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:02.439537   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/bridge-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:02.451092   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/bridge-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:02.472558   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/bridge-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:02.514129   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/bridge-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:02.595814   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/bridge-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:02.757347   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/bridge-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:03.079320   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/bridge-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:03.721534   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/bridge-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:05.003225   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/bridge-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:07.117863   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/custom-flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:07.565429   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/bridge-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:12.687437   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/bridge-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:12.787092   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/enable-default-cni-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:12.793568   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/enable-default-cni-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:12.805042   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/enable-default-cni-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:12.826502   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/enable-default-cni-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:12.867911   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/enable-default-cni-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:12.949399   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/enable-default-cni-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:13.110953   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/enable-default-cni-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:13.432741   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/enable-default-cni-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:14.074749   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/enable-default-cni-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:15.356970   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/enable-default-cni-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:17.918488   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/enable-default-cni-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:22.929242   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/bridge-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:23.040121   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/enable-default-cni-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:29.851151   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/auto-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:31.973019   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:33.281801   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/enable-default-cni-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:34.787553   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kubenet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:34.794039   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kubenet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:34.805451   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kubenet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:34.826874   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kubenet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:34.868396   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kubenet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:34.949886   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kubenet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:35.111784   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kubenet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:35.433545   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kubenet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:36.075581   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kubenet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:37.357060   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kubenet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:39.919088   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kubenet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:43.410649   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/bridge-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:45.041140   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kubenet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:47.671042   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/skaffold-138225/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:53.763268   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/enable-default-cni-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:55.283497   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kubenet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:59.314154   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/false-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:57:13.308553   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kindnet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:57:15.765281   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kubenet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:57:24.372295   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/bridge-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:57:28.681533   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/functional-433731/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:57:34.725566   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/enable-default-cni-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:57:41.013815   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kindnet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:57:46.914850   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:57:53.895270   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:57:56.727336   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kubenet-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:58:10.735839   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/skaffold-138225/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:58:11.263834   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/calico-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:58:23.253838   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/custom-flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-339588 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1: (5m16.621390152s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-339588 -n default-k8s-diff-port-339588
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (316.94s)
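The `cert_rotation.go:171` flood interleaved with the SecondStart output above repeats one error per profile (`flannel-096594`, `bridge-096594`, `kubenet-096594`, …) whose client cert was removed when that profile was deleted earlier in the run. A quick way to tally the noise per profile from a saved copy of this log; the `grep` pattern is an assumption based on the klog line format shown here:

```shell
# Count cert_rotation errors per minikube profile from log text on stdin.
count_cert_errors() {
  grep -o 'profiles/[^/]*/client.crt' | sort | uniq -c | sort -rn
}

# Three lines from this report, fed in as sample input:
count_cert_errors <<'EOF'
E0918 20:55:10.033227   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:55:10.039658   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:02.433019   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/bridge-096594/client.crt: no such file or directory" logger="UnhandledError"
EOF
```

On the sample above this prints a count of 2 for the flannel profile's cert and 1 for the bridge profile's; run against the full log it shows which deleted profile generates the most background error spam.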

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-w888d" [5485f9c1-ea0c-4eaa-b5dc-a8814a13f954] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-695b96c756-w888d" [5485f9c1-ea0c-4eaa-b5dc-a8814a13f954] Running
E0918 20:58:38.967389   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/calico-096594/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.005080656s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-w888d" [5485f9c1-ea0c-4eaa-b5dc-a8814a13f954] Running
E0918 20:58:46.294310   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/bridge-096594/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005435994s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-097693 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-097693 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.64s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-097693 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-097693 -n no-preload-097693
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-097693 -n no-preload-097693: exit status 2 (259.906795ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-097693 -n no-preload-097693
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-097693 -n no-preload-097693: exit status 2 (266.45641ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-097693 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-097693 -n no-preload-097693
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-097693 -n no-preload-097693
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.64s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (66.57s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-562502 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1
E0918 20:58:56.646904   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/enable-default-cni-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:59:11.343752   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/gvisor-909102/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-562502 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1: (1m6.567258327s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (66.57s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bnwcw" [bd9251df-3212-4417-8002-95bd2716e163] Running
E0918 20:59:15.451734   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/false-096594/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:59:18.648663   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/kubenet-096594/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005899907s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bnwcw" [bd9251df-3212-4417-8002-95bd2716e163] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006785091s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-784000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-784000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-784000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-784000 -n embed-certs-784000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-784000 -n embed-certs-784000: exit status 2 (298.394381ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-784000 -n embed-certs-784000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-784000 -n embed-certs-784000: exit status 2 (299.30768ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-784000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-784000 -n embed-certs-784000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-784000 -n embed-certs-784000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.94s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-562502 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-562502 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.058650473s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/newest-cni/serial/Stop (8.4s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-562502 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-562502 --alsologtostderr -v=3: (8.396612482s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.40s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-pxs6q" [e201f77f-8869-4af2-a081-67495229c3a4] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-695b96c756-pxs6q" [e201f77f-8869-4af2-a081-67495229c3a4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.004978204s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.01s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-562502 -n newest-cni-562502
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-562502 -n newest-cni-562502: exit status 7 (76.752414ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-562502 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (40.36s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-562502 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-562502 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1: (40.005334834s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-562502 -n newest-cni-562502
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (40.36s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-pxs6q" [e201f77f-8869-4af2-a081-67495229c3a4] Running
E0918 21:00:10.032966   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/flannel-096594/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00527913s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-339588 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-339588 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-339588 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-339588 -n default-k8s-diff-port-339588
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-339588 -n default-k8s-diff-port-339588: exit status 2 (260.772623ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-339588 -n default-k8s-diff-port-339588
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-339588 -n default-k8s-diff-port-339588: exit status 2 (267.224598ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-339588 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-339588 -n default-k8s-diff-port-339588
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-339588 -n default-k8s-diff-port-339588
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.66s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-562502 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (2.85s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-562502 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-562502 -n newest-cni-562502
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-562502 -n newest-cni-562502: exit status 2 (282.57603ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-562502 -n newest-cni-562502
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-562502 -n newest-cni-562502: exit status 2 (261.503254ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-562502 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-562502 -n newest-cni-562502
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-562502 -n newest-cni-562502
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.85s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bjvtl" [537cc5a9-053a-48a7-bb8a-23b9557fdd96] Running
E0918 21:02:46.915375   14866 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7655/.minikube/profiles/addons-656419/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004698192s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bjvtl" [537cc5a9-053a-48a7-bb8a-23b9557fdd96] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005189769s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-252613 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-252613 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-252613 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-252613 -n old-k8s-version-252613
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-252613 -n old-k8s-version-252613: exit status 2 (260.090444ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-252613 -n old-k8s-version-252613
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-252613 -n old-k8s-version-252613: exit status 2 (259.736422ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-252613 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-252613 -n old-k8s-version-252613
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-252613 -n old-k8s-version-252613
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.55s)

Test skip (31/341)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.68s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-096594 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-096594

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-096594

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-096594

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-096594

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-096594

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-096594

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-096594

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-096594

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-096594

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-096594

>>> host: /etc/nsswitch.conf:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: /etc/hosts:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: /etc/resolv.conf:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-096594

>>> host: crictl pods:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: crictl containers:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> k8s: describe netcat deployment:
error: context "cilium-096594" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-096594" does not exist

>>> k8s: netcat logs:
error: context "cilium-096594" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-096594" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-096594" does not exist

>>> k8s: coredns logs:
error: context "cilium-096594" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-096594" does not exist

>>> k8s: api server logs:
error: context "cilium-096594" does not exist

>>> host: /etc/cni:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: ip a s:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: ip r s:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: iptables-save:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: iptables table nat:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-096594

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-096594

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-096594" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-096594" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-096594

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-096594

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-096594" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-096594" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-096594" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-096594" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-096594" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: kubelet daemon config:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> k8s: kubelet logs:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-096594

>>> host: docker daemon status:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: docker daemon config:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: docker system info:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: cri-docker daemon status:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: cri-docker daemon config:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: cri-dockerd version:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: containerd daemon status:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: containerd daemon config:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: containerd config dump:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: crio daemon status:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: crio daemon config:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: /etc/crio:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

>>> host: crio config:
* Profile "cilium-096594" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-096594"

----------------------- debugLogs end: cilium-096594 [took: 3.528688781s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-096594" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-096594
--- SKIP: TestNetworkPlugins/group/cilium (3.68s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-801381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-801381
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)