Test Report: KVM_Linux 19598

cb70ad94d69a229bf8d3511a5a00af396fa2386e:2024-09-10:36157

Failed tests (1/341)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 73.47        |
TestAddons/parallel/Registry (73.47s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.936614ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-vdrtp" [85b87341-00c1-4bec-876a-9eabfeb2cb35] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004104609s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vktjt" [8a998f90-a892-4121-b82b-dbe047da7b63] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003870342s
addons_test.go:342: (dbg) Run:  kubectl --context addons-447248 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-447248 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-447248 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.072067136s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-447248 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-447248 ip
2024/09/10 17:43:01 [DEBUG] GET http://192.168.39.59:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-447248 addons disable registry --alsologtostderr -v=1
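
The failing step above is the in-cluster HTTP probe against the registry Service. A minimal manual reproduction, assuming the addons-447248 profile is still running: the first command is taken verbatim from the log; the second is an illustrative addition that uses busybox's nslookup applet to separate a DNS failure from a connect timeout.

  # Re-run the probe that timed out
  kubectl --context addons-447248 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

  # Check that cluster DNS resolves the Service name at all
  kubectl --context addons-447248 run --rm dns-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    nslookup registry.kube-system.svc.cluster.local

If the name fails to resolve, the problem would sit with cluster DNS; if it resolves but the wget still times out, suspicion falls on the registry pod or its Service endpoints.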
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-447248 -n addons-447248
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-447248 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-794328                                                                     | download-only-794328 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| delete  | -p download-only-418478                                                                     | download-only-418478 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-654438 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | binary-mirror-654438                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33519                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-654438                                                                     | binary-mirror-654438 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| addons  | disable dashboard -p                                                                        | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | addons-447248                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | addons-447248                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-447248 --wait=true                                                                | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:33 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2  --addons=ingress                                                             |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-447248 addons disable                                                                | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:33 UTC | 10 Sep 24 17:33 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-447248 addons                                                                        | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:41 UTC | 10 Sep 24 17:41 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-447248 ssh cat                                                                       | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
	|         | /opt/local-path-provisioner/pvc-75c4d344-3c02-4cb9-bec5-00ae17ac00c0_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-447248 addons disable                                                                | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-447248 addons disable                                                                | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
	|         | -p addons-447248                                                                            |                      |         |         |                     |                     |
	| addons  | addons-447248 addons disable                                                                | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-447248 ssh curl -s                                                                   | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-447248 ip                                                                            | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
	| addons  | addons-447248 addons disable                                                                | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-447248 addons disable                                                                | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-447248 addons                                                                        | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
	|         | addons-447248                                                                               |                      |         |         |                     |                     |
	| addons  | addons-447248 addons                                                                        | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
	|         | -p addons-447248                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:42 UTC | 10 Sep 24 17:42 UTC |
	|         | addons-447248                                                                               |                      |         |         |                     |                     |
	| ip      | addons-447248 ip                                                                            | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:43 UTC | 10 Sep 24 17:43 UTC |
	| addons  | addons-447248 addons disable                                                                | addons-447248        | jenkins | v1.34.0 | 10 Sep 24 17:43 UTC | 10 Sep 24 17:43 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
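	In one piece, the cluster-creation command recorded in the "start" row of this table (reassembled from its Args column) is:
	
	  out/minikube-linux-amd64 start -p addons-447248 --wait=true \
	    --memory=4000 --alsologtostderr --addons=registry \
	    --addons=metrics-server --addons=volumesnapshots \
	    --addons=csi-hostpath-driver --addons=gcp-auth \
	    --addons=cloud-spanner --addons=inspektor-gadget \
	    --addons=storage-provisioner-rancher --addons=nvidia-device-plugin \
	    --addons=yakd --addons=volcano --driver=kvm2 --addons=ingress \
	    --addons=ingress-dns --addons=helm-tiller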
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:29:25
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:29:25.298507   13790 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:29:25.298669   13790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:25.298680   13790 out.go:358] Setting ErrFile to fd 2...
	I0910 17:29:25.298686   13790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:25.298890   13790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5970/.minikube/bin
	I0910 17:29:25.299527   13790 out.go:352] Setting JSON to false
	I0910 17:29:25.300385   13790 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":714,"bootTime":1725988651,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:29:25.300446   13790 start.go:139] virtualization: kvm guest
	I0910 17:29:25.302817   13790 out.go:177] * [addons-447248] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 17:29:25.304013   13790 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 17:29:25.304038   13790 notify.go:220] Checking for updates...
	I0910 17:29:25.306374   13790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:29:25.307757   13790 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5970/kubeconfig
	I0910 17:29:25.309097   13790 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5970/.minikube
	I0910 17:29:25.310266   13790 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 17:29:25.311540   13790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 17:29:25.312835   13790 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:29:25.345072   13790 out.go:177] * Using the kvm2 driver based on user configuration
	I0910 17:29:25.346028   13790 start.go:297] selected driver: kvm2
	I0910 17:29:25.346041   13790 start.go:901] validating driver "kvm2" against <nil>
	I0910 17:29:25.346051   13790 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 17:29:25.346733   13790 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:29:25.346797   13790 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5970/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 17:29:25.361539   13790 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 17:29:25.361608   13790 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 17:29:25.361835   13790 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:29:25.361906   13790 cni.go:84] Creating CNI manager for ""
	I0910 17:29:25.361927   13790 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 17:29:25.361941   13790 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 17:29:25.362011   13790 start.go:340] cluster config:
	{Name:addons-447248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-447248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:29:25.362114   13790 iso.go:125] acquiring lock: {Name:mk102d590109224a2b8dd000e4c8f825ff8b3e36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:29:25.364036   13790 out.go:177] * Starting "addons-447248" primary control-plane node in "addons-447248" cluster
	I0910 17:29:25.365270   13790 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 17:29:25.365304   13790 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19598-5970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0910 17:29:25.365315   13790 cache.go:56] Caching tarball of preloaded images
	I0910 17:29:25.365397   13790 preload.go:172] Found /home/jenkins/minikube-integration/19598-5970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0910 17:29:25.365411   13790 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0910 17:29:25.365708   13790 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/config.json ...
	I0910 17:29:25.365733   13790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/config.json: {Name:mk2aab535340725a44db4899c03ec317ad02bdf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:25.365884   13790 start.go:360] acquireMachinesLock for addons-447248: {Name:mk389645d6ba45c0fe83d880173fe4460352b2d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 17:29:25.365948   13790 start.go:364] duration metric: took 48.347µs to acquireMachinesLock for "addons-447248"
	I0910 17:29:25.365970   13790 start.go:93] Provisioning new machine with config: &{Name:addons-447248 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-447248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 17:29:25.366050   13790 start.go:125] createHost starting for "" (driver="kvm2")
	I0910 17:29:25.367680   13790 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0910 17:29:25.367811   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:29:25.367854   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:29:25.382382   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40409
	I0910 17:29:25.382873   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:29:25.383467   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:29:25.383493   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:29:25.383851   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:29:25.384016   13790 main.go:141] libmachine: (addons-447248) Calling .GetMachineName
	I0910 17:29:25.384229   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:29:25.384392   13790 start.go:159] libmachine.API.Create for "addons-447248" (driver="kvm2")
	I0910 17:29:25.384421   13790 client.go:168] LocalClient.Create starting
	I0910 17:29:25.384463   13790 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19598-5970/.minikube/certs/ca.pem
	I0910 17:29:25.479390   13790 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19598-5970/.minikube/certs/cert.pem
	I0910 17:29:25.747528   13790 main.go:141] libmachine: Running pre-create checks...
	I0910 17:29:25.747551   13790 main.go:141] libmachine: (addons-447248) Calling .PreCreateCheck
	I0910 17:29:25.747995   13790 main.go:141] libmachine: (addons-447248) Calling .GetConfigRaw
	I0910 17:29:25.748465   13790 main.go:141] libmachine: Creating machine...
	I0910 17:29:25.748480   13790 main.go:141] libmachine: (addons-447248) Calling .Create
	I0910 17:29:25.748644   13790 main.go:141] libmachine: (addons-447248) Creating KVM machine...
	I0910 17:29:25.749899   13790 main.go:141] libmachine: (addons-447248) DBG | found existing default KVM network
	I0910 17:29:25.750670   13790 main.go:141] libmachine: (addons-447248) DBG | I0910 17:29:25.750456   13812 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001831f0}
	I0910 17:29:25.750703   13790 main.go:141] libmachine: (addons-447248) DBG | created network xml: 
	I0910 17:29:25.750720   13790 main.go:141] libmachine: (addons-447248) DBG | <network>
	I0910 17:29:25.750732   13790 main.go:141] libmachine: (addons-447248) DBG |   <name>mk-addons-447248</name>
	I0910 17:29:25.750742   13790 main.go:141] libmachine: (addons-447248) DBG |   <dns enable='no'/>
	I0910 17:29:25.750749   13790 main.go:141] libmachine: (addons-447248) DBG |   
	I0910 17:29:25.750760   13790 main.go:141] libmachine: (addons-447248) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0910 17:29:25.750769   13790 main.go:141] libmachine: (addons-447248) DBG |     <dhcp>
	I0910 17:29:25.750775   13790 main.go:141] libmachine: (addons-447248) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0910 17:29:25.750780   13790 main.go:141] libmachine: (addons-447248) DBG |     </dhcp>
	I0910 17:29:25.750806   13790 main.go:141] libmachine: (addons-447248) DBG |   </ip>
	I0910 17:29:25.750829   13790 main.go:141] libmachine: (addons-447248) DBG |   
	I0910 17:29:25.750845   13790 main.go:141] libmachine: (addons-447248) DBG | </network>
	I0910 17:29:25.750856   13790 main.go:141] libmachine: (addons-447248) DBG | 
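	In one piece, the private libvirt network being defined by the DBG lines above is:
	
	  <network>
	    <name>mk-addons-447248</name>
	    <dns enable='no'/>
	    <ip address='192.168.39.1' netmask='255.255.255.0'>
	      <dhcp>
	        <range start='192.168.39.2' end='192.168.39.253'/>
	      </dhcp>
	    </ip>
	  </network>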
	I0910 17:29:25.756555   13790 main.go:141] libmachine: (addons-447248) DBG | trying to create private KVM network mk-addons-447248 192.168.39.0/24...
	I0910 17:29:25.819998   13790 main.go:141] libmachine: (addons-447248) Setting up store path in /home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248 ...
	I0910 17:29:25.820022   13790 main.go:141] libmachine: (addons-447248) DBG | private KVM network mk-addons-447248 192.168.39.0/24 created
	I0910 17:29:25.820043   13790 main.go:141] libmachine: (addons-447248) Building disk image from file:///home/jenkins/minikube-integration/19598-5970/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso
	I0910 17:29:25.820066   13790 main.go:141] libmachine: (addons-447248) Downloading /home/jenkins/minikube-integration/19598-5970/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19598-5970/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso...
	I0910 17:29:25.820083   13790 main.go:141] libmachine: (addons-447248) DBG | I0910 17:29:25.819952   13812 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19598-5970/.minikube
	I0910 17:29:26.071410   13790 main.go:141] libmachine: (addons-447248) DBG | I0910 17:29:26.071274   13812 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa...
	I0910 17:29:26.155577   13790 main.go:141] libmachine: (addons-447248) DBG | I0910 17:29:26.155435   13812 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/addons-447248.rawdisk...
	I0910 17:29:26.155609   13790 main.go:141] libmachine: (addons-447248) DBG | Writing magic tar header
	I0910 17:29:26.155623   13790 main.go:141] libmachine: (addons-447248) DBG | Writing SSH key tar header
	I0910 17:29:26.155632   13790 main.go:141] libmachine: (addons-447248) DBG | I0910 17:29:26.155549   13812 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248 ...
	I0910 17:29:26.155659   13790 main.go:141] libmachine: (addons-447248) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248
	I0910 17:29:26.155685   13790 main.go:141] libmachine: (addons-447248) Setting executable bit set on /home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248 (perms=drwx------)
	I0910 17:29:26.155699   13790 main.go:141] libmachine: (addons-447248) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5970/.minikube/machines
	I0910 17:29:26.155711   13790 main.go:141] libmachine: (addons-447248) Setting executable bit set on /home/jenkins/minikube-integration/19598-5970/.minikube/machines (perms=drwxr-xr-x)
	I0910 17:29:26.155720   13790 main.go:141] libmachine: (addons-447248) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5970/.minikube
	I0910 17:29:26.155729   13790 main.go:141] libmachine: (addons-447248) Setting executable bit set on /home/jenkins/minikube-integration/19598-5970/.minikube (perms=drwxr-xr-x)
	I0910 17:29:26.155739   13790 main.go:141] libmachine: (addons-447248) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19598-5970
	I0910 17:29:26.155752   13790 main.go:141] libmachine: (addons-447248) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0910 17:29:26.155760   13790 main.go:141] libmachine: (addons-447248) DBG | Checking permissions on dir: /home/jenkins
	I0910 17:29:26.155773   13790 main.go:141] libmachine: (addons-447248) DBG | Checking permissions on dir: /home
	I0910 17:29:26.155784   13790 main.go:141] libmachine: (addons-447248) DBG | Skipping /home - not owner
	I0910 17:29:26.155796   13790 main.go:141] libmachine: (addons-447248) Setting executable bit set on /home/jenkins/minikube-integration/19598-5970 (perms=drwxrwxr-x)
	I0910 17:29:26.155811   13790 main.go:141] libmachine: (addons-447248) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0910 17:29:26.155829   13790 main.go:141] libmachine: (addons-447248) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0910 17:29:26.155839   13790 main.go:141] libmachine: (addons-447248) Creating domain...
	I0910 17:29:26.156806   13790 main.go:141] libmachine: (addons-447248) define libvirt domain using xml: 
	I0910 17:29:26.156840   13790 main.go:141] libmachine: (addons-447248) <domain type='kvm'>
	I0910 17:29:26.156853   13790 main.go:141] libmachine: (addons-447248)   <name>addons-447248</name>
	I0910 17:29:26.156865   13790 main.go:141] libmachine: (addons-447248)   <memory unit='MiB'>4000</memory>
	I0910 17:29:26.156873   13790 main.go:141] libmachine: (addons-447248)   <vcpu>2</vcpu>
	I0910 17:29:26.156881   13790 main.go:141] libmachine: (addons-447248)   <features>
	I0910 17:29:26.156890   13790 main.go:141] libmachine: (addons-447248)     <acpi/>
	I0910 17:29:26.156900   13790 main.go:141] libmachine: (addons-447248)     <apic/>
	I0910 17:29:26.156912   13790 main.go:141] libmachine: (addons-447248)     <pae/>
	I0910 17:29:26.156918   13790 main.go:141] libmachine: (addons-447248)     
	I0910 17:29:26.156928   13790 main.go:141] libmachine: (addons-447248)   </features>
	I0910 17:29:26.156939   13790 main.go:141] libmachine: (addons-447248)   <cpu mode='host-passthrough'>
	I0910 17:29:26.156985   13790 main.go:141] libmachine: (addons-447248)   
	I0910 17:29:26.157020   13790 main.go:141] libmachine: (addons-447248)   </cpu>
	I0910 17:29:26.157044   13790 main.go:141] libmachine: (addons-447248)   <os>
	I0910 17:29:26.157064   13790 main.go:141] libmachine: (addons-447248)     <type>hvm</type>
	I0910 17:29:26.157080   13790 main.go:141] libmachine: (addons-447248)     <boot dev='cdrom'/>
	I0910 17:29:26.157097   13790 main.go:141] libmachine: (addons-447248)     <boot dev='hd'/>
	I0910 17:29:26.157108   13790 main.go:141] libmachine: (addons-447248)     <bootmenu enable='no'/>
	I0910 17:29:26.157116   13790 main.go:141] libmachine: (addons-447248)   </os>
	I0910 17:29:26.157128   13790 main.go:141] libmachine: (addons-447248)   <devices>
	I0910 17:29:26.157140   13790 main.go:141] libmachine: (addons-447248)     <disk type='file' device='cdrom'>
	I0910 17:29:26.157158   13790 main.go:141] libmachine: (addons-447248)       <source file='/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/boot2docker.iso'/>
	I0910 17:29:26.157174   13790 main.go:141] libmachine: (addons-447248)       <target dev='hdc' bus='scsi'/>
	I0910 17:29:26.157187   13790 main.go:141] libmachine: (addons-447248)       <readonly/>
	I0910 17:29:26.157197   13790 main.go:141] libmachine: (addons-447248)     </disk>
	I0910 17:29:26.157211   13790 main.go:141] libmachine: (addons-447248)     <disk type='file' device='disk'>
	I0910 17:29:26.157225   13790 main.go:141] libmachine: (addons-447248)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0910 17:29:26.157253   13790 main.go:141] libmachine: (addons-447248)       <source file='/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/addons-447248.rawdisk'/>
	I0910 17:29:26.157268   13790 main.go:141] libmachine: (addons-447248)       <target dev='hda' bus='virtio'/>
	I0910 17:29:26.157275   13790 main.go:141] libmachine: (addons-447248)     </disk>
	I0910 17:29:26.157282   13790 main.go:141] libmachine: (addons-447248)     <interface type='network'>
	I0910 17:29:26.157296   13790 main.go:141] libmachine: (addons-447248)       <source network='mk-addons-447248'/>
	I0910 17:29:26.157307   13790 main.go:141] libmachine: (addons-447248)       <model type='virtio'/>
	I0910 17:29:26.157318   13790 main.go:141] libmachine: (addons-447248)     </interface>
	I0910 17:29:26.157329   13790 main.go:141] libmachine: (addons-447248)     <interface type='network'>
	I0910 17:29:26.157341   13790 main.go:141] libmachine: (addons-447248)       <source network='default'/>
	I0910 17:29:26.157355   13790 main.go:141] libmachine: (addons-447248)       <model type='virtio'/>
	I0910 17:29:26.157364   13790 main.go:141] libmachine: (addons-447248)     </interface>
	I0910 17:29:26.157370   13790 main.go:141] libmachine: (addons-447248)     <serial type='pty'>
	I0910 17:29:26.157382   13790 main.go:141] libmachine: (addons-447248)       <target port='0'/>
	I0910 17:29:26.157392   13790 main.go:141] libmachine: (addons-447248)     </serial>
	I0910 17:29:26.157404   13790 main.go:141] libmachine: (addons-447248)     <console type='pty'>
	I0910 17:29:26.157435   13790 main.go:141] libmachine: (addons-447248)       <target type='serial' port='0'/>
	I0910 17:29:26.157445   13790 main.go:141] libmachine: (addons-447248)     </console>
	I0910 17:29:26.157450   13790 main.go:141] libmachine: (addons-447248)     <rng model='virtio'>
	I0910 17:29:26.157459   13790 main.go:141] libmachine: (addons-447248)       <backend model='random'>/dev/random</backend>
	I0910 17:29:26.157469   13790 main.go:141] libmachine: (addons-447248)     </rng>
	I0910 17:29:26.157481   13790 main.go:141] libmachine: (addons-447248)     
	I0910 17:29:26.157494   13790 main.go:141] libmachine: (addons-447248)     
	I0910 17:29:26.157505   13790 main.go:141] libmachine: (addons-447248)   </devices>
	I0910 17:29:26.157515   13790 main.go:141] libmachine: (addons-447248) </domain>
	I0910 17:29:26.157525   13790 main.go:141] libmachine: (addons-447248) 
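	In one piece, the libvirt domain (KVM VM) XML being defined above, with the empty filler lines from the log omitted, is:
	
	  <domain type='kvm'>
	    <name>addons-447248</name>
	    <memory unit='MiB'>4000</memory>
	    <vcpu>2</vcpu>
	    <features>
	      <acpi/>
	      <apic/>
	      <pae/>
	    </features>
	    <cpu mode='host-passthrough'>
	    </cpu>
	    <os>
	      <type>hvm</type>
	      <boot dev='cdrom'/>
	      <boot dev='hd'/>
	      <bootmenu enable='no'/>
	    </os>
	    <devices>
	      <disk type='file' device='cdrom'>
	        <source file='/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/boot2docker.iso'/>
	        <target dev='hdc' bus='scsi'/>
	        <readonly/>
	      </disk>
	      <disk type='file' device='disk'>
	        <driver name='qemu' type='raw' cache='default' io='threads' />
	        <source file='/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/addons-447248.rawdisk'/>
	        <target dev='hda' bus='virtio'/>
	      </disk>
	      <interface type='network'>
	        <source network='mk-addons-447248'/>
	        <model type='virtio'/>
	      </interface>
	      <interface type='network'>
	        <source network='default'/>
	        <model type='virtio'/>
	      </interface>
	      <serial type='pty'>
	        <target port='0'/>
	      </serial>
	      <console type='pty'>
	        <target type='serial' port='0'/>
	      </console>
	      <rng model='virtio'>
	        <backend model='random'>/dev/random</backend>
	      </rng>
	    </devices>
	  </domain>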
	I0910 17:29:26.163259   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:80:e1:7b in network default
	I0910 17:29:26.163747   13790 main.go:141] libmachine: (addons-447248) Ensuring networks are active...
	I0910 17:29:26.163774   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:26.164425   13790 main.go:141] libmachine: (addons-447248) Ensuring network default is active
	I0910 17:29:26.164722   13790 main.go:141] libmachine: (addons-447248) Ensuring network mk-addons-447248 is active
	I0910 17:29:26.165162   13790 main.go:141] libmachine: (addons-447248) Getting domain xml...
	I0910 17:29:26.165851   13790 main.go:141] libmachine: (addons-447248) Creating domain...
	I0910 17:29:27.594197   13790 main.go:141] libmachine: (addons-447248) Waiting to get IP...
	I0910 17:29:27.594914   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:27.595490   13790 main.go:141] libmachine: (addons-447248) DBG | unable to find current IP address of domain addons-447248 in network mk-addons-447248
	I0910 17:29:27.595536   13790 main.go:141] libmachine: (addons-447248) DBG | I0910 17:29:27.595456   13812 retry.go:31] will retry after 201.560637ms: waiting for machine to come up
	I0910 17:29:27.799072   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:27.799553   13790 main.go:141] libmachine: (addons-447248) DBG | unable to find current IP address of domain addons-447248 in network mk-addons-447248
	I0910 17:29:27.799582   13790 main.go:141] libmachine: (addons-447248) DBG | I0910 17:29:27.799499   13812 retry.go:31] will retry after 305.39472ms: waiting for machine to come up
	I0910 17:29:28.106958   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:28.107369   13790 main.go:141] libmachine: (addons-447248) DBG | unable to find current IP address of domain addons-447248 in network mk-addons-447248
	I0910 17:29:28.107419   13790 main.go:141] libmachine: (addons-447248) DBG | I0910 17:29:28.107339   13812 retry.go:31] will retry after 296.906209ms: waiting for machine to come up
	I0910 17:29:28.405749   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:28.406180   13790 main.go:141] libmachine: (addons-447248) DBG | unable to find current IP address of domain addons-447248 in network mk-addons-447248
	I0910 17:29:28.406226   13790 main.go:141] libmachine: (addons-447248) DBG | I0910 17:29:28.406184   13812 retry.go:31] will retry after 464.405232ms: waiting for machine to come up
	I0910 17:29:28.871700   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:28.872195   13790 main.go:141] libmachine: (addons-447248) DBG | unable to find current IP address of domain addons-447248 in network mk-addons-447248
	I0910 17:29:28.872232   13790 main.go:141] libmachine: (addons-447248) DBG | I0910 17:29:28.872115   13812 retry.go:31] will retry after 684.176938ms: waiting for machine to come up
	I0910 17:29:29.558253   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:29.558804   13790 main.go:141] libmachine: (addons-447248) DBG | unable to find current IP address of domain addons-447248 in network mk-addons-447248
	I0910 17:29:29.558824   13790 main.go:141] libmachine: (addons-447248) DBG | I0910 17:29:29.558740   13812 retry.go:31] will retry after 671.621926ms: waiting for machine to come up
	I0910 17:29:30.231551   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:30.231982   13790 main.go:141] libmachine: (addons-447248) DBG | unable to find current IP address of domain addons-447248 in network mk-addons-447248
	I0910 17:29:30.232006   13790 main.go:141] libmachine: (addons-447248) DBG | I0910 17:29:30.231915   13812 retry.go:31] will retry after 838.156808ms: waiting for machine to come up
	I0910 17:29:31.071739   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:31.072194   13790 main.go:141] libmachine: (addons-447248) DBG | unable to find current IP address of domain addons-447248 in network mk-addons-447248
	I0910 17:29:31.072217   13790 main.go:141] libmachine: (addons-447248) DBG | I0910 17:29:31.072156   13812 retry.go:31] will retry after 973.342372ms: waiting for machine to come up
	I0910 17:29:32.047371   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:32.047740   13790 main.go:141] libmachine: (addons-447248) DBG | unable to find current IP address of domain addons-447248 in network mk-addons-447248
	I0910 17:29:32.047767   13790 main.go:141] libmachine: (addons-447248) DBG | I0910 17:29:32.047695   13812 retry.go:31] will retry after 1.364306492s: waiting for machine to come up
	I0910 17:29:33.414336   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:33.414996   13790 main.go:141] libmachine: (addons-447248) DBG | unable to find current IP address of domain addons-447248 in network mk-addons-447248
	I0910 17:29:33.415019   13790 main.go:141] libmachine: (addons-447248) DBG | I0910 17:29:33.414946   13812 retry.go:31] will retry after 1.473203589s: waiting for machine to come up
	I0910 17:29:34.890087   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:34.890592   13790 main.go:141] libmachine: (addons-447248) DBG | unable to find current IP address of domain addons-447248 in network mk-addons-447248
	I0910 17:29:34.890621   13790 main.go:141] libmachine: (addons-447248) DBG | I0910 17:29:34.890531   13812 retry.go:31] will retry after 2.36813041s: waiting for machine to come up
	I0910 17:29:37.260269   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:37.260714   13790 main.go:141] libmachine: (addons-447248) DBG | unable to find current IP address of domain addons-447248 in network mk-addons-447248
	I0910 17:29:37.260735   13790 main.go:141] libmachine: (addons-447248) DBG | I0910 17:29:37.260681   13812 retry.go:31] will retry after 2.334396562s: waiting for machine to come up
	I0910 17:29:39.598229   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:39.598639   13790 main.go:141] libmachine: (addons-447248) DBG | unable to find current IP address of domain addons-447248 in network mk-addons-447248
	I0910 17:29:39.598669   13790 main.go:141] libmachine: (addons-447248) DBG | I0910 17:29:39.598593   13812 retry.go:31] will retry after 4.391361607s: waiting for machine to come up
	I0910 17:29:43.994626   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:43.995149   13790 main.go:141] libmachine: (addons-447248) DBG | unable to find current IP address of domain addons-447248 in network mk-addons-447248
	I0910 17:29:43.995181   13790 main.go:141] libmachine: (addons-447248) DBG | I0910 17:29:43.995116   13812 retry.go:31] will retry after 3.481902545s: waiting for machine to come up
	I0910 17:29:47.479685   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:47.480239   13790 main.go:141] libmachine: (addons-447248) Found IP for machine: 192.168.39.59
	I0910 17:29:47.480270   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has current primary IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:47.480280   13790 main.go:141] libmachine: (addons-447248) Reserving static IP address...
	I0910 17:29:47.481095   13790 main.go:141] libmachine: (addons-447248) DBG | unable to find host DHCP lease matching {name: "addons-447248", mac: "52:54:00:af:d3:4b", ip: "192.168.39.59"} in network mk-addons-447248
	I0910 17:29:47.644291   13790 main.go:141] libmachine: (addons-447248) DBG | Getting to WaitForSSH function...
	I0910 17:29:47.644325   13790 main.go:141] libmachine: (addons-447248) Reserved static IP address: 192.168.39.59
	I0910 17:29:47.644339   13790 main.go:141] libmachine: (addons-447248) Waiting for SSH to be available...
	I0910 17:29:47.647468   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:47.647893   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:minikube Clientid:01:52:54:00:af:d3:4b}
	I0910 17:29:47.647919   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:47.648068   13790 main.go:141] libmachine: (addons-447248) DBG | Using SSH client type: external
	I0910 17:29:47.648109   13790 main.go:141] libmachine: (addons-447248) DBG | Using SSH private key: /home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa (-rw-------)
	I0910 17:29:47.648157   13790 main.go:141] libmachine: (addons-447248) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0910 17:29:47.648172   13790 main.go:141] libmachine: (addons-447248) DBG | About to run SSH command:
	I0910 17:29:47.648181   13790 main.go:141] libmachine: (addons-447248) DBG | exit 0
	I0910 17:29:47.778432   13790 main.go:141] libmachine: (addons-447248) DBG | SSH cmd err, output: <nil>: 
	I0910 17:29:47.778735   13790 main.go:141] libmachine: (addons-447248) KVM machine creation complete!
	I0910 17:29:47.779022   13790 main.go:141] libmachine: (addons-447248) Calling .GetConfigRaw
	I0910 17:29:47.779575   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:29:47.779741   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:29:47.779907   13790 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0910 17:29:47.779925   13790 main.go:141] libmachine: (addons-447248) Calling .GetState
	I0910 17:29:47.781197   13790 main.go:141] libmachine: Detecting operating system of created instance...
	I0910 17:29:47.781213   13790 main.go:141] libmachine: Waiting for SSH to be available...
	I0910 17:29:47.781219   13790 main.go:141] libmachine: Getting to WaitForSSH function...
	I0910 17:29:47.781227   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:29:47.783324   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:47.783614   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:29:47.783638   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:47.783756   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:29:47.783900   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:29:47.784065   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:29:47.784201   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:29:47.784334   13790 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:47.784517   13790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0910 17:29:47.784529   13790 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0910 17:29:47.885803   13790 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 17:29:47.885828   13790 main.go:141] libmachine: Detecting the provisioner...
	I0910 17:29:47.885838   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:29:47.888480   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:47.888806   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:29:47.888833   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:47.888970   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:29:47.889150   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:29:47.889285   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:29:47.889400   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:29:47.889593   13790 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:47.889776   13790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0910 17:29:47.889792   13790 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0910 17:29:47.991004   13790 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0910 17:29:47.991087   13790 main.go:141] libmachine: found compatible host: buildroot
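Provisioner detection keys off the ID field of the /etc/os-release dump above. A sketch of that parse using only the standard library; parseOSRelease is an illustrative name, not minikube's:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns KEY=value lines (as printed above) into a map,
// stripping optional quotes around values.
func parseOSRelease(raw string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(raw))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	osr := parseOSRelease("NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n")
	if osr["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot") // matches the log line above
	}
}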
	I0910 17:29:47.991094   13790 main.go:141] libmachine: Provisioning with buildroot...
	I0910 17:29:47.991101   13790 main.go:141] libmachine: (addons-447248) Calling .GetMachineName
	I0910 17:29:47.991343   13790 buildroot.go:166] provisioning hostname "addons-447248"
	I0910 17:29:47.991366   13790 main.go:141] libmachine: (addons-447248) Calling .GetMachineName
	I0910 17:29:47.991522   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:29:47.993922   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:47.994224   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:29:47.994253   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:47.994385   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:29:47.994568   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:29:47.994710   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:29:47.994825   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:29:47.994969   13790 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:47.995124   13790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0910 17:29:47.995151   13790 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-447248 && echo "addons-447248" | sudo tee /etc/hostname
	I0910 17:29:48.109233   13790 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-447248
	
	I0910 17:29:48.109262   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:29:48.112057   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:48.112391   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:29:48.112429   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:48.112698   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:29:48.113007   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:29:48.113176   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:29:48.113326   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:29:48.113528   13790 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:48.113677   13790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0910 17:29:48.113691   13790 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-447248' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-447248/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-447248' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 17:29:48.222855   13790 main.go:141] libmachine: SSH cmd err, output: <nil>: 
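The shell run above pins the machine name to 127.0.1.1: if no /etc/hosts line already ends in the hostname, it either rewrites an existing 127.0.1.1 entry in place or appends a new one. The same decision expressed in Go over the file's contents, as a sketch (rewriteHosts is an illustrative name):

package main

import (
	"fmt"
	"regexp"
)

func rewriteHosts(hosts, name string) string {
	// Already present? Leave the file alone.
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts
	}
	// Rewrite an existing 127.0.1.1 line, mirroring the sed branch above.
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	// Otherwise append, mirroring the `tee -a` branch.
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(rewriteHosts("127.0.0.1 localhost\n", "addons-447248"))
}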
	I0910 17:29:48.222880   13790 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19598-5970/.minikube CaCertPath:/home/jenkins/minikube-integration/19598-5970/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19598-5970/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19598-5970/.minikube}
	I0910 17:29:48.222919   13790 buildroot.go:174] setting up certificates
	I0910 17:29:48.222931   13790 provision.go:84] configureAuth start
	I0910 17:29:48.222939   13790 main.go:141] libmachine: (addons-447248) Calling .GetMachineName
	I0910 17:29:48.223254   13790 main.go:141] libmachine: (addons-447248) Calling .GetIP
	I0910 17:29:48.225916   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:48.226247   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:29:48.226269   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:48.226581   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:29:48.228575   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:48.229102   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:29:48.229130   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:48.229258   13790 provision.go:143] copyHostCerts
	I0910 17:29:48.229335   13790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5970/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19598-5970/.minikube/ca.pem (1078 bytes)
	I0910 17:29:48.229500   13790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5970/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19598-5970/.minikube/cert.pem (1123 bytes)
	I0910 17:29:48.229602   13790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5970/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19598-5970/.minikube/key.pem (1679 bytes)
	I0910 17:29:48.229706   13790 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19598-5970/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19598-5970/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19598-5970/.minikube/certs/ca-key.pem org=jenkins.addons-447248 san=[127.0.0.1 192.168.39.59 addons-447248 localhost minikube]
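configureAuth mints a server certificate signed by the local CA, with exactly the SANs listed above (loopback, the VM IP, the machine name). A self-contained sketch of that issuance with crypto/x509; the throwaway CA, key size, and lifetime here are illustrative, not minikube's actual parameters:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs reported in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-447248"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.59")},
		DNSNames:     []string{"addons-447248", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}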
	I0910 17:29:48.431026   13790 provision.go:177] copyRemoteCerts
	I0910 17:29:48.431079   13790 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 17:29:48.431101   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:29:48.433627   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:48.433956   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:29:48.433986   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:48.434151   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:29:48.434419   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:29:48.434618   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:29:48.434784   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	I0910 17:29:48.518013   13790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5970/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 17:29:48.542897   13790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5970/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0910 17:29:48.568021   13790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5970/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 17:29:48.594098   13790 provision.go:87] duration metric: took 371.156579ms to configureAuth
	I0910 17:29:48.594124   13790 buildroot.go:189] setting minikube options for container-runtime
	I0910 17:29:48.594337   13790 config.go:182] Loaded profile config "addons-447248": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:29:48.594362   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:29:48.594654   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:29:48.597405   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:48.597845   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:29:48.597877   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:48.598075   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:29:48.598296   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:29:48.598466   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:29:48.598642   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:29:48.598816   13790 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:48.598973   13790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0910 17:29:48.598983   13790 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 17:29:48.700151   13790 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 17:29:48.700174   13790 buildroot.go:70] root file system type: tmpfs
	I0910 17:29:48.700293   13790 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 17:29:48.700315   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:29:48.703260   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:48.703602   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:29:48.703631   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:48.703858   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:29:48.704050   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:29:48.704227   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:29:48.704365   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:29:48.704530   13790 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:48.704701   13790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0910 17:29:48.704757   13790 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 17:29:48.819656   13790 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
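The unit echoed back above is why the file carries an empty ExecStart= before the real one: systemd treats a bare assignment as "clear anything inherited", so only the dockerd command line that follows survives. The provisioner assembles this file on the Go side before piping it through tee; a toy rendering under that assumption, with an abbreviated, illustrative template rather than the real one:

package main

import (
	"os"
	"text/template"
)

const unit = `[Service]
# Empty assignment first: clears any ExecStart inherited from a base unit.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock {{.ExtraFlags}}
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unit))
	t.Execute(os.Stdout, struct{ ExtraFlags string }{
		ExtraFlags: "--label provider=kvm2 --insecure-registry 10.96.0.0/12",
	})
}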
	
	I0910 17:29:48.819693   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:29:48.822200   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:48.822768   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:29:48.822795   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:48.822988   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:29:48.823174   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:29:48.823358   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:29:48.823535   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:29:48.823707   13790 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:48.823854   13790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0910 17:29:48.823870   13790 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 17:29:50.582789   13790 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0910 17:29:50.582816   13790 main.go:141] libmachine: Checking connection to Docker...
	I0910 17:29:50.582828   13790 main.go:141] libmachine: (addons-447248) Calling .GetURL
	I0910 17:29:50.584278   13790 main.go:141] libmachine: (addons-447248) DBG | Using libvirt version 6000000
	I0910 17:29:50.586381   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:50.586771   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:29:50.586795   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:50.586969   13790 main.go:141] libmachine: Docker is up and running!
	I0910 17:29:50.586987   13790 main.go:141] libmachine: Reticulating splines...
	I0910 17:29:50.586996   13790 client.go:171] duration metric: took 25.20256624s to LocalClient.Create
	I0910 17:29:50.587022   13790 start.go:167] duration metric: took 25.202629487s to libmachine.API.Create "addons-447248"
	I0910 17:29:50.587034   13790 start.go:293] postStartSetup for "addons-447248" (driver="kvm2")
	I0910 17:29:50.587049   13790 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 17:29:50.587065   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:29:50.587281   13790 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 17:29:50.587301   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:29:50.589418   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:50.589711   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:29:50.589739   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:50.589904   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:29:50.590103   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:29:50.590247   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:29:50.590423   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	I0910 17:29:50.668775   13790 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 17:29:50.672607   13790 info.go:137] Remote host: Buildroot 2023.02.9
	I0910 17:29:50.672634   13790 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5970/.minikube/addons for local assets ...
	I0910 17:29:50.672757   13790 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5970/.minikube/files for local assets ...
	I0910 17:29:50.672796   13790 start.go:296] duration metric: took 85.75303ms for postStartSetup
	I0910 17:29:50.672832   13790 main.go:141] libmachine: (addons-447248) Calling .GetConfigRaw
	I0910 17:29:50.673338   13790 main.go:141] libmachine: (addons-447248) Calling .GetIP
	I0910 17:29:50.675976   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:50.676462   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:29:50.676490   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:50.676735   13790 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/config.json ...
	I0910 17:29:50.676943   13790 start.go:128] duration metric: took 25.310883358s to createHost
	I0910 17:29:50.676966   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:29:50.679345   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:50.679638   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:29:50.679667   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:50.679775   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:29:50.679934   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:29:50.680076   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:29:50.680267   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:29:50.680443   13790 main.go:141] libmachine: Using SSH client type: native
	I0910 17:29:50.680629   13790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I0910 17:29:50.680642   13790 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 17:29:50.782901   13790 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725989390.763561077
	
	I0910 17:29:50.782923   13790 fix.go:216] guest clock: 1725989390.763561077
	I0910 17:29:50.782931   13790 fix.go:229] Guest: 2024-09-10 17:29:50.763561077 +0000 UTC Remote: 2024-09-10 17:29:50.676955648 +0000 UTC m=+25.412611693 (delta=86.605429ms)
	I0910 17:29:50.782969   13790 fix.go:200] guest clock delta is within tolerance: 86.605429ms
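The `date +%s.%N` round-trip above is a clock-skew check: the guest's epoch timestamp is parsed and compared against the host clock, and the 86 ms delta falls inside the tolerance, so no resync is needed. A sketch of that comparison (guestClockDelta is an illustrative name):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses `date +%s.%N` output and returns host minus guest.
func guestClockDelta(out string) (time.Duration, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
	s, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return 0, err
	}
	// Right-pad the fractional part to nanosecond precision.
	ns, err := strconv.ParseInt((frac + "000000000")[:9], 10, 64)
	if err != nil {
		return 0, err
	}
	return time.Since(time.Unix(s, ns)), nil
}

func main() {
	d, _ := guestClockDelta("1725989390.763561077") // the value from the log
	fmt.Println("guest clock delta:", d)
}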
	I0910 17:29:50.782974   13790 start.go:83] releasing machines lock for "addons-447248", held for 25.41701603s
	I0910 17:29:50.782991   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:29:50.783270   13790 main.go:141] libmachine: (addons-447248) Calling .GetIP
	I0910 17:29:50.785984   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:50.786381   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:29:50.786412   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:50.786610   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:29:50.787176   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:29:50.787348   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:29:50.787483   13790 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 17:29:50.787539   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:29:50.787539   13790 ssh_runner.go:195] Run: cat /version.json
	I0910 17:29:50.787602   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:29:50.790078   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:50.790176   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:50.790423   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:29:50.790449   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:50.790476   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:29:50.790491   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:50.790592   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:29:50.790786   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:29:50.790797   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:29:50.790984   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:29:50.790992   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:29:50.791162   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:29:50.791157   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	I0910 17:29:50.791316   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	I0910 17:29:50.867371   13790 ssh_runner.go:195] Run: systemctl --version
	I0910 17:29:50.892152   13790 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 17:29:50.897792   13790 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 17:29:50.897858   13790 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 17:29:50.914325   13790 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 17:29:50.914357   13790 start.go:495] detecting cgroup driver to use...
	I0910 17:29:50.914496   13790 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 17:29:50.932647   13790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 17:29:50.942992   13790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 17:29:50.954237   13790 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 17:29:50.954315   13790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 17:29:50.964973   13790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 17:29:50.975657   13790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 17:29:50.986337   13790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 17:29:50.997198   13790 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 17:29:51.008403   13790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 17:29:51.018980   13790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 17:29:51.029284   13790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
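The run of sed invocations above rewrites /etc/containerd/config.toml in place: sandbox (pause) image, cgroup driver (SystemdCgroup = false, matching the cgroupfs choice), the runc v2 shim, the CNI conf_dir, and unprivileged ports. As a sketch, the SystemdCgroup toggle is the following substitution, which Go's regexp package can express the same way sed does:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Same as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}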
	I0910 17:29:51.040221   13790 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 17:29:51.050188   13790 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 17:29:51.060259   13790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:29:51.172378   13790 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 17:29:51.196829   13790 start.go:495] detecting cgroup driver to use...
	I0910 17:29:51.196910   13790 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 17:29:51.217755   13790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 17:29:51.234151   13790 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 17:29:51.249542   13790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 17:29:51.264522   13790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 17:29:51.277621   13790 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 17:29:51.305480   13790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
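Before settling on Docker, the sequence above stops any competing runtime: `systemctl is-active --quiet` exits 0 only for an active unit, so a nil error from the probe triggers the matching `systemctl stop`, and the probe is repeated to confirm. A local sketch of that probe/stop pair (run on the host here for simplicity; the log runs it over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// isActive reports whether a systemd unit is active: exit status 0 means yes.
func isActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	for _, unit := range []string{"containerd", "crio"} {
		if isActive(unit) {
			fmt.Println("stopping", unit)
			exec.Command("sudo", "systemctl", "stop", "-f", unit).Run()
		}
	}
}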
	I0910 17:29:51.319298   13790 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 17:29:51.337043   13790 ssh_runner.go:195] Run: which cri-dockerd
	I0910 17:29:51.340630   13790 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 17:29:51.349286   13790 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 17:29:51.365930   13790 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 17:29:51.485452   13790 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 17:29:51.599064   13790 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 17:29:51.599201   13790 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0910 17:29:51.616468   13790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:29:51.730669   13790 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 17:29:54.089430   13790 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.358717409s)
	I0910 17:29:54.089517   13790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 17:29:54.104800   13790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 17:29:54.118648   13790 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 17:29:54.234470   13790 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 17:29:54.354050   13790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:29:54.464160   13790 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 17:29:54.482368   13790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 17:29:54.495612   13790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:29:54.609816   13790 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0910 17:29:54.685086   13790 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 17:29:54.685179   13790 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 17:29:54.690794   13790 start.go:563] Will wait 60s for crictl version
	I0910 17:29:54.690857   13790 ssh_runner.go:195] Run: which crictl
	I0910 17:29:54.694469   13790 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 17:29:54.732305   13790 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0910 17:29:54.732367   13790 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 17:29:54.757480   13790 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 17:29:54.786749   13790 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0910 17:29:54.786796   13790 main.go:141] libmachine: (addons-447248) Calling .GetIP
	I0910 17:29:54.789358   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:54.789713   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:29:54.789736   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:29:54.789931   13790 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0910 17:29:54.793959   13790 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:29:54.806478   13790 kubeadm.go:883] updating cluster {Name:addons-447248 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-447248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.59 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 17:29:54.806643   13790 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 17:29:54.806695   13790 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 17:29:54.822960   13790 docker.go:685] Got preloaded images: 
	I0910 17:29:54.822982   13790 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0910 17:29:54.823021   13790 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 17:29:54.832427   13790 ssh_runner.go:195] Run: which lz4
	I0910 17:29:54.836429   13790 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 17:29:54.840380   13790 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
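The stat failure above is the expected path on first boot: exit status 1 from the remote `stat` is read as "preload tarball absent", which routes into the 342 MB scp on the next line. Over golang.org/x/crypto/ssh the same test inspects the remote exit status, roughly as follows (preloadExists is an illustrative name, and wiring up the *ssh.Client is elided):

package main

import (
	"errors"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// preloadExists runs the same stat probe as the log and treats a non-zero
// remote exit status as "file not there yet".
func preloadExists(client *ssh.Client) (bool, error) {
	sess, err := client.NewSession()
	if err != nil {
		return false, err
	}
	defer sess.Close()
	err = sess.Run(`stat -c "%s %y" /preloaded.tar.lz4`)
	var ee *ssh.ExitError
	if errors.As(err, &ee) {
		return false, nil // e.g. exit status 1: no such file, so upload it
	}
	return err == nil, err
}

func main() { fmt.Println("see preloadExists; needs a connected *ssh.Client") }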
	I0910 17:29:54.840417   13790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0910 17:29:56.004247   13790 docker.go:649] duration metric: took 1.167852028s to copy over tarball
	I0910 17:29:56.004311   13790 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 17:29:57.886882   13790 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.882538005s)
	I0910 17:29:57.886925   13790 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 17:29:57.919638   13790 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 17:29:57.929112   13790 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0910 17:29:57.945854   13790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:29:58.055533   13790 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 17:30:02.001809   13790 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.946234322s)
	I0910 17:30:02.001904   13790 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 17:30:02.023946   13790 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0910 17:30:02.023973   13790 cache_images.go:84] Images are preloaded, skipping loading
	I0910 17:30:02.023999   13790 kubeadm.go:934] updating node { 192.168.39.59 8443 v1.31.0 docker true true} ...
	I0910 17:30:02.024128   13790 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-447248 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-447248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0910 17:30:02.024216   13790 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0910 17:30:02.076542   13790 cni.go:84] Creating CNI manager for ""
	I0910 17:30:02.076575   13790 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 17:30:02.076589   13790 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 17:30:02.076615   13790 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.59 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-447248 NodeName:addons-447248 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 17:30:02.076761   13790 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-447248"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.59"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
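The generated file above is one YAML stream holding four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A sketch that walks such a stream and reports each document's kind, assuming gopkg.in/yaml.v3 (the stream below is trimmed to just the kind headers):

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	stream := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	dec := yaml.NewDecoder(strings.NewReader(stream))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Println(doc["kind"]) // each document's kind, in order
	}
}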
	
	I0910 17:30:02.076828   13790 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 17:30:02.087776   13790 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 17:30:02.087846   13790 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 17:30:02.097525   13790 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0910 17:30:02.114759   13790 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 17:30:02.132670   13790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0910 17:30:02.149924   13790 ssh_runner.go:195] Run: grep 192.168.39.59	control-plane.minikube.internal$ /etc/hosts
	I0910 17:30:02.153721   13790 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.59	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 17:30:02.166081   13790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:30:02.276589   13790 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:30:02.298293   13790 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248 for IP: 192.168.39.59
	I0910 17:30:02.298324   13790 certs.go:194] generating shared ca certs ...
	I0910 17:30:02.298344   13790 certs.go:226] acquiring lock for ca certs: {Name:mkc07563733fc05632d202c31bba96b872246f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:02.298480   13790 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5970/.minikube/ca.key
	I0910 17:30:02.490687   13790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5970/.minikube/ca.crt ...
	I0910 17:30:02.490720   13790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5970/.minikube/ca.crt: {Name:mk171e993a03e3d9223dfade60ad1b143af61429 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:02.490907   13790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5970/.minikube/ca.key ...
	I0910 17:30:02.490921   13790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5970/.minikube/ca.key: {Name:mke05e4f9bdf72f239a23645bd090df0b8401a90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:02.491018   13790 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5970/.minikube/proxy-client-ca.key
	I0910 17:30:02.713994   13790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5970/.minikube/proxy-client-ca.crt ...
	I0910 17:30:02.714031   13790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5970/.minikube/proxy-client-ca.crt: {Name:mkfe72615b1a654b1094c11b2d4af9c4bad03bc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:02.714197   13790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5970/.minikube/proxy-client-ca.key ...
	I0910 17:30:02.714208   13790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5970/.minikube/proxy-client-ca.key: {Name:mkfd9dc597c118450e977c637dea53eb37a7990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:02.714278   13790 certs.go:256] generating profile certs ...
	I0910 17:30:02.714331   13790 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.key
	I0910 17:30:02.714346   13790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt with IP's: []
	I0910 17:30:03.156975   13790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt ...
	I0910 17:30:03.157008   13790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: {Name:mk4e4a5d18a3c9ba41e4cc9fc56d3426f1115c2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:03.157174   13790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.key ...
	I0910 17:30:03.157191   13790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.key: {Name:mk47e29d1dd960fe5f62c6072e78227a62b39b51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:03.157263   13790 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/apiserver.key.6db6f5d3
	I0910 17:30:03.157281   13790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/apiserver.crt.6db6f5d3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.59]
	I0910 17:30:03.270118   13790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/apiserver.crt.6db6f5d3 ...
	I0910 17:30:03.270159   13790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/apiserver.crt.6db6f5d3: {Name:mkec6ca5234e6b04a37ce110862c21325f70d17d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:03.270330   13790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/apiserver.key.6db6f5d3 ...
	I0910 17:30:03.270344   13790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/apiserver.key.6db6f5d3: {Name:mkfb82230fdee7f42657c31cdbce04fb42a47297 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:03.270417   13790 certs.go:381] copying /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/apiserver.crt.6db6f5d3 -> /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/apiserver.crt
	I0910 17:30:03.270492   13790 certs.go:385] copying /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/apiserver.key.6db6f5d3 -> /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/apiserver.key
	I0910 17:30:03.270565   13790 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/proxy-client.key
	I0910 17:30:03.270581   13790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/proxy-client.crt with IP's: []
	I0910 17:30:03.318822   13790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/proxy-client.crt ...
	I0910 17:30:03.318851   13790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/proxy-client.crt: {Name:mk7b01c39d1e2f85de58d4a399ec3942e7886be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:03.319007   13790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/proxy-client.key ...
	I0910 17:30:03.319018   13790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/proxy-client.key: {Name:mkf83150a25743f65754ec26677567bfe34b7185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:03.319182   13790 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5970/.minikube/certs/ca-key.pem (1675 bytes)
	I0910 17:30:03.319216   13790 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5970/.minikube/certs/ca.pem (1078 bytes)
	I0910 17:30:03.319240   13790 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5970/.minikube/certs/cert.pem (1123 bytes)
	I0910 17:30:03.319264   13790 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5970/.minikube/certs/key.pem (1679 bytes)
	I0910 17:30:03.319825   13790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5970/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 17:30:03.346021   13790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5970/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 17:30:03.370480   13790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5970/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 17:30:03.396286   13790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5970/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 17:30:03.420796   13790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0910 17:30:03.445260   13790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 17:30:03.469485   13790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 17:30:03.493390   13790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 17:30:03.523163   13790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19598-5970/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 17:30:03.547832   13790 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 17:30:03.564795   13790 ssh_runner.go:195] Run: openssl version
	I0910 17:30:03.570566   13790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 17:30:03.581640   13790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:30:03.586104   13790 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:30:03.586171   13790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:30:03.592006   13790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
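
	The two openssl/ln steps above implement OpenSSL's hash-based CA lookup: a CA in /etc/ssl/certs must be reachable under a file named <subject-hash>.0 (here b5213941.0, the hash printed by the command at 17:30:03.586171). A minimal sketch of the same linking, reusing the paths from this run:

		# Compute the subject-name hash OpenSSL uses for CA directory lookups.
		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		# Expose the CA under the hash-named path so TLS clients on the node trust it.
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
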
	I0910 17:30:03.603058   13790 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 17:30:03.607474   13790 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 17:30:03.607572   13790 kubeadm.go:392] StartCluster: {Name:addons-447248 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-447248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.59 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:30:03.607685   13790 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 17:30:03.623209   13790 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 17:30:03.636531   13790 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 17:30:03.648423   13790 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 17:30:03.660913   13790 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 17:30:03.660935   13790 kubeadm.go:157] found existing configuration files:
	
	I0910 17:30:03.661007   13790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 17:30:03.673557   13790 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 17:30:03.673622   13790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 17:30:03.685652   13790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 17:30:03.695014   13790 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 17:30:03.695088   13790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 17:30:03.704776   13790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 17:30:03.714074   13790 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 17:30:03.714132   13790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 17:30:03.723767   13790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 17:30:03.733215   13790 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 17:30:03.733291   13790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
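
	The four check/remove blocks above all follow one pattern: grep each kubeconfig for the expected control-plane endpoint and, when the file is missing or points elsewhere, delete it so the upcoming kubeadm init regenerates it. Condensed as a sketch:

		# Remove any kubeconfig that does not reference the expected control plane.
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
			sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f" \
				|| sudo rm -f "/etc/kubernetes/$f"
		done
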
	I0910 17:30:03.742801   13790 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 17:30:03.792542   13790 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 17:30:03.792652   13790 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 17:30:03.886030   13790 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 17:30:03.886182   13790 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 17:30:03.886320   13790 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 17:30:03.900172   13790 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 17:30:03.903164   13790 out.go:235]   - Generating certificates and keys ...
	I0910 17:30:03.904875   13790 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 17:30:03.904958   13790 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 17:30:04.014024   13790 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 17:30:04.127590   13790 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0910 17:30:04.271885   13790 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0910 17:30:04.324779   13790 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0910 17:30:04.539497   13790 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0910 17:30:04.540139   13790 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-447248 localhost] and IPs [192.168.39.59 127.0.0.1 ::1]
	I0910 17:30:04.821160   13790 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0910 17:30:04.821366   13790 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-447248 localhost] and IPs [192.168.39.59 127.0.0.1 ::1]
	I0910 17:30:04.933904   13790 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 17:30:05.043993   13790 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 17:30:05.240266   13790 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0910 17:30:05.240514   13790 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 17:30:05.570273   13790 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 17:30:05.656175   13790 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 17:30:05.751357   13790 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 17:30:05.977318   13790 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 17:30:06.218029   13790 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 17:30:06.218803   13790 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 17:30:06.221179   13790 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 17:30:06.223004   13790 out.go:235]   - Booting up control plane ...
	I0910 17:30:06.223122   13790 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 17:30:06.223243   13790 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 17:30:06.223341   13790 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 17:30:06.238356   13790 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 17:30:06.245309   13790 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 17:30:06.245400   13790 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 17:30:06.378444   13790 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 17:30:06.378613   13790 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 17:30:06.880299   13790 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.95664ms
	I0910 17:30:06.880409   13790 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 17:30:12.384376   13790 kubeadm.go:310] [api-check] The API server is healthy after 5.504847708s
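
	The kubelet-check and api-check phases above poll plain healthz endpoints. The equivalent manual probes, run on the node itself (a sketch using the ports from this run; -k skips CA verification for a quick check):

		# Kubelet health endpoint (plain HTTP, bound to localhost).
		curl -s http://127.0.0.1:10248/healthz
		# API server health on the profile's advertised address and port.
		curl -sk https://192.168.39.59:8443/healthz
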
	I0910 17:30:12.399680   13790 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 17:30:12.416670   13790 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 17:30:12.451705   13790 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 17:30:12.451925   13790 kubeadm.go:310] [mark-control-plane] Marking the node addons-447248 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 17:30:12.465334   13790 kubeadm.go:310] [bootstrap-token] Using token: 6jn6xg.l7r7qve2d8cdduij
	I0910 17:30:12.466735   13790 out.go:235]   - Configuring RBAC rules ...
	I0910 17:30:12.466907   13790 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 17:30:12.474204   13790 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 17:30:12.487479   13790 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 17:30:12.494147   13790 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 17:30:12.498254   13790 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 17:30:12.502957   13790 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 17:30:12.793221   13790 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 17:30:13.230850   13790 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 17:30:13.791103   13790 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 17:30:13.791999   13790 kubeadm.go:310] 
	I0910 17:30:13.792088   13790 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 17:30:13.792101   13790 kubeadm.go:310] 
	I0910 17:30:13.792182   13790 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 17:30:13.792191   13790 kubeadm.go:310] 
	I0910 17:30:13.792225   13790 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 17:30:13.792293   13790 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 17:30:13.792379   13790 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 17:30:13.792403   13790 kubeadm.go:310] 
	I0910 17:30:13.792491   13790 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 17:30:13.792516   13790 kubeadm.go:310] 
	I0910 17:30:13.792588   13790 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 17:30:13.792597   13790 kubeadm.go:310] 
	I0910 17:30:13.792665   13790 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 17:30:13.792774   13790 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 17:30:13.792861   13790 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 17:30:13.792872   13790 kubeadm.go:310] 
	I0910 17:30:13.792981   13790 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 17:30:13.793099   13790 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 17:30:13.793112   13790 kubeadm.go:310] 
	I0910 17:30:13.793185   13790 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6jn6xg.l7r7qve2d8cdduij \
	I0910 17:30:13.793273   13790 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0824dd8005c4f30e6fff63ca4f2d74288d1f0328ebfb786dc83fd8d5f03a8d3d \
	I0910 17:30:13.793328   13790 kubeadm.go:310] 	--control-plane 
	I0910 17:30:13.793348   13790 kubeadm.go:310] 
	I0910 17:30:13.793461   13790 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 17:30:13.793473   13790 kubeadm.go:310] 
	I0910 17:30:13.793564   13790 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6jn6xg.l7r7qve2d8cdduij \
	I0910 17:30:13.793681   13790 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0824dd8005c4f30e6fff63ca4f2d74288d1f0328ebfb786dc83fd8d5f03a8d3d 
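
	The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key in DER form. It can be recomputed on the control plane with the command documented for kubeadm (assuming an RSA CA key, kubeadm's default):

		openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
			| openssl rsa -pubin -outform der 2>/dev/null \
			| openssl dgst -sha256 -hex | sed 's/^.* //'
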
	I0910 17:30:13.794746   13790 kubeadm.go:310] W0910 17:30:03.772724    1512 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 17:30:13.795091   13790 kubeadm.go:310] W0910 17:30:03.773686    1512 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 17:30:13.795221   13790 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 17:30:13.795258   13790 cni.go:84] Creating CNI manager for ""
	I0910 17:30:13.795275   13790 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 17:30:13.797633   13790 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 17:30:13.799215   13790 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 17:30:13.810441   13790 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
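
	For reference, a representative bridge conflist of the kind written above (a sketch only, not the exact 496-byte file minikube generates; the subnet and plugin options are illustrative):

		sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
		{
		  "cniVersion": "0.3.1",
		  "name": "bridge",
		  "plugins": [
		    {
		      "type": "bridge",
		      "bridge": "bridge",
		      "isDefaultGateway": true,
		      "ipMasq": true,
		      "hairpinMode": true,
		      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
		    },
		    { "type": "portmap", "capabilities": { "portMappings": true } }
		  ]
		}
		EOF
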
	I0910 17:30:13.830570   13790 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 17:30:13.830639   13790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:13.830649   13790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-447248 minikube.k8s.io/updated_at=2024_09_10T17_30_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=addons-447248 minikube.k8s.io/primary=true
	I0910 17:30:13.966127   13790 ops.go:34] apiserver oom_adj: -16
	I0910 17:30:13.966184   13790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:14.466217   13790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:14.967106   13790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:15.466593   13790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:15.967046   13790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:16.466221   13790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:16.966989   13790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:17.467105   13790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:17.966402   13790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:18.041792   13790 kubeadm.go:1113] duration metric: took 4.211210735s to wait for elevateKubeSystemPrivileges
	I0910 17:30:18.041828   13790 kubeadm.go:394] duration metric: took 14.43429883s to StartCluster
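
	The repeated "kubectl get sa default" runs above are a readiness poll: the default ServiceAccount only exists once the controller-manager's serviceaccount controller has synced, so minikube retries about every 500ms until the command exits 0 (the elevateKubeSystemPrivileges wait). A standalone sketch of the same wait, reusing the binary and kubeconfig paths from this run:

		# Poll until the default ServiceAccount appears, then proceed.
		until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
				--kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
			sleep 0.5
		done
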
	I0910 17:30:18.041845   13790 settings.go:142] acquiring lock: {Name:mk86384da84eca6f59be3ba3e9d4a7e79c3e17db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:18.041983   13790 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5970/kubeconfig
	I0910 17:30:18.042493   13790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5970/kubeconfig: {Name:mk9a34c2924729664300d5f57b274110e1678a91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:18.042755   13790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0910 17:30:18.042760   13790 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.59 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 17:30:18.042880   13790 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0910 17:30:18.042996   13790 addons.go:69] Setting yakd=true in profile "addons-447248"
	I0910 17:30:18.043029   13790 addons.go:234] Setting addon yakd=true in "addons-447248"
	I0910 17:30:18.043035   13790 config.go:182] Loaded profile config "addons-447248": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:30:18.043064   13790 host.go:66] Checking if "addons-447248" exists ...
	I0910 17:30:18.043064   13790 addons.go:69] Setting gcp-auth=true in profile "addons-447248"
	I0910 17:30:18.043061   13790 addons.go:69] Setting inspektor-gadget=true in profile "addons-447248"
	I0910 17:30:18.043093   13790 addons.go:69] Setting cloud-spanner=true in profile "addons-447248"
	I0910 17:30:18.043104   13790 mustload.go:65] Loading cluster: addons-447248
	I0910 17:30:18.043124   13790 addons.go:234] Setting addon inspektor-gadget=true in "addons-447248"
	I0910 17:30:18.043130   13790 addons.go:234] Setting addon cloud-spanner=true in "addons-447248"
	I0910 17:30:18.043142   13790 addons.go:69] Setting metrics-server=true in profile "addons-447248"
	I0910 17:30:18.043154   13790 host.go:66] Checking if "addons-447248" exists ...
	I0910 17:30:18.043175   13790 addons.go:69] Setting default-storageclass=true in profile "addons-447248"
	I0910 17:30:18.043421   13790 host.go:66] Checking if "addons-447248" exists ...
	I0910 17:30:18.043462   13790 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-447248"
	I0910 17:30:18.043501   13790 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-447248"
	I0910 17:30:18.043539   13790 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-447248"
	I0910 17:30:18.043598   13790 host.go:66] Checking if "addons-447248" exists ...
	I0910 17:30:18.043639   13790 addons.go:69] Setting registry=true in profile "addons-447248"
	I0910 17:30:18.043187   13790 addons.go:234] Setting addon metrics-server=true in "addons-447248"
	I0910 17:30:18.043667   13790 addons.go:234] Setting addon registry=true in "addons-447248"
	I0910 17:30:18.043668   13790 addons.go:69] Setting volcano=true in profile "addons-447248"
	I0910 17:30:18.043693   13790 host.go:66] Checking if "addons-447248" exists ...
	I0910 17:30:18.043701   13790 addons.go:234] Setting addon volcano=true in "addons-447248"
	I0910 17:30:18.043722   13790 addons.go:69] Setting helm-tiller=true in profile "addons-447248"
	I0910 17:30:18.043748   13790 addons.go:234] Setting addon helm-tiller=true in "addons-447248"
	I0910 17:30:18.043775   13790 host.go:66] Checking if "addons-447248" exists ...
	I0910 17:30:18.043791   13790 addons.go:69] Setting ingress-dns=true in profile "addons-447248"
	I0910 17:30:18.043826   13790 addons.go:234] Setting addon ingress-dns=true in "addons-447248"
	I0910 17:30:18.043847   13790 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-447248"
	I0910 17:30:18.043861   13790 host.go:66] Checking if "addons-447248" exists ...
	I0910 17:30:18.043856   13790 config.go:182] Loaded profile config "addons-447248": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:30:18.043873   13790 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-447248"
	I0910 17:30:18.044221   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.044245   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.043114   13790 addons.go:69] Setting storage-provisioner=true in profile "addons-447248"
	I0910 17:30:18.044299   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.044301   13790 addons.go:234] Setting addon storage-provisioner=true in "addons-447248"
	I0910 17:30:18.044311   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.044330   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.044336   13790 host.go:66] Checking if "addons-447248" exists ...
	I0910 17:30:18.044345   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.044350   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.044388   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.043087   13790 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-447248"
	I0910 17:30:18.044456   13790 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-447248"
	I0910 17:30:18.044460   13790 addons.go:69] Setting volumesnapshots=true in profile "addons-447248"
	I0910 17:30:18.044489   13790 addons.go:234] Setting addon volumesnapshots=true in "addons-447248"
	I0910 17:30:18.044854   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.044887   13790 out.go:177] * Verifying Kubernetes components...
	I0910 17:30:18.044898   13790 addons.go:69] Setting ingress=true in profile "addons-447248"
	I0910 17:30:18.044946   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.044873   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.045012   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.045087   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.045104   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.045160   13790 host.go:66] Checking if "addons-447248" exists ...
	I0910 17:30:18.045301   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.045350   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.045574   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.045608   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.045610   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.045637   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.044890   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.045785   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.044929   13790 addons.go:234] Setting addon ingress=true in "addons-447248"
	I0910 17:30:18.046070   13790 host.go:66] Checking if "addons-447248" exists ...
	I0910 17:30:18.046456   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.046495   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.046502   13790 host.go:66] Checking if "addons-447248" exists ...
	I0910 17:30:18.046580   13790 host.go:66] Checking if "addons-447248" exists ...
	I0910 17:30:18.047012   13790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 17:30:18.047353   13790 host.go:66] Checking if "addons-447248" exists ...
	I0910 17:30:18.047696   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.047727   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.066544   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
	I0910 17:30:18.066573   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43097
	I0910 17:30:18.066671   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44241
	I0910 17:30:18.067640   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.067642   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.067646   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.068201   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.068227   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.068228   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.068242   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.068255   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.068272   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.068612   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.068627   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.068635   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.069288   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.069296   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.069315   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.069332   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.069390   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.069421   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.074937   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32793
	I0910 17:30:18.075542   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.076263   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.076282   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.077094   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38129
	I0910 17:30:18.077041   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36927
	I0910 17:30:18.077405   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.077604   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.077709   13790 main.go:141] libmachine: (addons-447248) Calling .GetState
	I0910 17:30:18.077754   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.078110   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.078140   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.078253   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.078274   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.078501   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.078947   13790 main.go:141] libmachine: (addons-447248) Calling .GetState
	I0910 17:30:18.078967   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.080127   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.080167   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.080805   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.080846   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.083102   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.083136   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.085213   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36749
	I0910 17:30:18.085390   13790 host.go:66] Checking if "addons-447248" exists ...
	I0910 17:30:18.085884   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.085928   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.086697   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.086737   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.087003   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.087594   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.087616   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.089896   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.093672   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39663
	I0910 17:30:18.094648   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.111316   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45383
	I0910 17:30:18.111339   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34051
	I0910 17:30:18.111882   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.111927   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.112167   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.112181   13790 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-447248"
	I0910 17:30:18.112218   13790 host.go:66] Checking if "addons-447248" exists ...
	I0910 17:30:18.112436   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.112449   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.112589   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.112624   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.113039   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.113232   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.113382   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.113420   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.113716   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.113736   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.113742   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.113757   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.114107   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.114279   13790 main.go:141] libmachine: (addons-447248) Calling .GetState
	I0910 17:30:18.116643   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.116738   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41543
	I0910 17:30:18.117446   13790 addons.go:234] Setting addon default-storageclass=true in "addons-447248"
	I0910 17:30:18.117458   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.117496   13790 host.go:66] Checking if "addons-447248" exists ...
	I0910 17:30:18.117847   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.117872   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.118093   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.118121   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.118212   13790 main.go:141] libmachine: (addons-447248) Calling .GetState
	I0910 17:30:18.118593   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.118814   13790 main.go:141] libmachine: (addons-447248) Calling .GetState
	I0910 17:30:18.120842   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:30:18.121399   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:30:18.123878   13790 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0910 17:30:18.123949   13790 out.go:177]   - Using image docker.io/registry:2.8.3
	I0910 17:30:18.125490   13790 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0910 17:30:18.125511   13790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0910 17:30:18.125539   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:30:18.127604   13790 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0910 17:30:18.128965   13790 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0910 17:30:18.128983   13790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0910 17:30:18.129007   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:30:18.129248   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.131045   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:30:18.131083   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.131587   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46501
	I0910 17:30:18.132060   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.132620   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.132639   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.132651   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I0910 17:30:18.132699   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.133000   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.133058   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:30:18.133156   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:30:18.133180   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.133326   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:30:18.133375   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:30:18.133498   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:30:18.133545   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:30:18.133660   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	I0910 17:30:18.133901   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:30:18.134413   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.134457   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.134577   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	I0910 17:30:18.134830   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36495
	I0910 17:30:18.134951   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.135314   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38503
	I0910 17:30:18.135603   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.135632   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.135704   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43907
	I0910 17:30:18.135942   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.136121   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.136257   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.136312   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.136792   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.136809   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.137136   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.137153   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.137322   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.137338   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.137491   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.137509   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.137542   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.138085   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.138437   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.138478   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.138698   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.138724   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.139009   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.139256   13790 main.go:141] libmachine: (addons-447248) Calling .GetState
	I0910 17:30:18.140988   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45479
	I0910 17:30:18.141154   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:30:18.141469   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.142017   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.142043   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.142403   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.142642   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37133
	I0910 17:30:18.143034   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.143504   13790 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0910 17:30:18.143510   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.143528   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.143845   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.144506   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.144564   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.144801   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40133
	I0910 17:30:18.144831   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33541
	I0910 17:30:18.145159   13790 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0910 17:30:18.145178   13790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0910 17:30:18.145199   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:30:18.145260   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.145336   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.145374   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.145758   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.145773   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.145834   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.146212   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.146466   13790 main.go:141] libmachine: (addons-447248) Calling .GetState
	I0910 17:30:18.146956   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.147048   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.147399   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.147588   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:30:18.147664   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0910 17:30:18.148170   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.148720   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.148737   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.149115   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.149712   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.149755   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.149960   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.149986   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:30:18.150006   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.150041   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:30:18.150273   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:30:18.150441   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:30:18.150612   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	I0910 17:30:18.152079   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:30:18.153393   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39011
	I0910 17:30:18.153809   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.154206   13790 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0910 17:30:18.154981   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.154998   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.155335   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.155488   13790 main.go:141] libmachine: (addons-447248) Calling .GetState
	I0910 17:30:18.155569   13790 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 17:30:18.155582   13790 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 17:30:18.155605   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:30:18.159055   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.159110   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:30:18.159991   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:30:18.160017   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.160217   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:30:18.160412   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:30:18.160639   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:30:18.160788   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	I0910 17:30:18.161894   13790 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0910 17:30:18.163515   13790 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0910 17:30:18.163535   13790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0910 17:30:18.163559   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:30:18.167014   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.171073   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:30:18.171107   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.171330   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:30:18.171556   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:30:18.171774   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:30:18.171935   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	I0910 17:30:18.171969   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33077
	I0910 17:30:18.172149   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37061
	I0910 17:30:18.172648   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.172825   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.173350   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.173374   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.173918   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.174014   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0910 17:30:18.174373   13790 main.go:141] libmachine: (addons-447248) Calling .GetState
	I0910 17:30:18.174656   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.175302   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.175367   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.176000   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.176681   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.176725   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.177161   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44705
	I0910 17:30:18.177584   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.177613   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.177862   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.178052   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.178834   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:30:18.178933   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.178948   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.179305   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:18.179367   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:18.179654   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.180676   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44795
	I0910 17:30:18.180852   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0910 17:30:18.180924   13790 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0910 17:30:18.181053   13790 main.go:141] libmachine: (addons-447248) Calling .GetState
	I0910 17:30:18.181351   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.181898   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.181917   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.181981   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.182463   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.182478   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.182531   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.182624   13790 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0910 17:30:18.182643   13790 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0910 17:30:18.182661   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:30:18.182790   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.182901   13790 main.go:141] libmachine: (addons-447248) Calling .GetState
	I0910 17:30:18.182922   13790 main.go:141] libmachine: (addons-447248) Calling .GetState
	I0910 17:30:18.182948   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:30:18.184653   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:30:18.185069   13790 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0910 17:30:18.185925   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:30:18.186523   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.186985   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:30:18.187006   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.187231   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:30:18.187372   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:30:18.187525   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:30:18.187622   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	I0910 17:30:18.187730   13790 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0910 17:30:18.187828   13790 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0910 17:30:18.187842   13790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0910 17:30:18.187859   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:30:18.188473   13790 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0910 17:30:18.190214   13790 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 17:30:18.190294   13790 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0910 17:30:18.190307   13790 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0910 17:30:18.190329   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:30:18.192020   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.192672   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:30:18.192799   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.192943   13790 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 17:30:18.193133   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:30:18.193338   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:30:18.193559   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:30:18.193609   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.193812   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	I0910 17:30:18.194136   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:30:18.194160   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.194314   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:30:18.194331   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
	I0910 17:30:18.194736   13790 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0910 17:30:18.194757   13790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0910 17:30:18.194774   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:30:18.196058   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45847
	I0910 17:30:18.196093   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:30:18.196329   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:30:18.196499   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	I0910 17:30:18.196554   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.197057   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.197077   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.197496   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.197538   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.197785   13790 main.go:141] libmachine: (addons-447248) Calling .GetState
	I0910 17:30:18.199055   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.199283   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.199305   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.199472   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:30:18.199488   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.199634   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:30:18.199913   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:30:18.199953   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:30:18.200253   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:30:18.200429   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	I0910 17:30:18.202295   13790 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 17:30:18.203101   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.203336   13790 main.go:141] libmachine: (addons-447248) Calling .GetState
	I0910 17:30:18.204071   13790 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:30:18.204088   13790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 17:30:18.204105   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:30:18.204389   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I0910 17:30:18.204729   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.205171   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:30:18.205350   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.205372   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.205864   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.206070   13790 main.go:141] libmachine: (addons-447248) Calling .GetState
	I0910 17:30:18.206990   13790 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0910 17:30:18.207828   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:30:18.208174   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.208567   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:30:18.208593   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.208817   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:30:18.209437   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:30:18.209583   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:30:18.209770   13790 out.go:177]   - Using image docker.io/busybox:stable
	I0910 17:30:18.209735   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	I0910 17:30:18.210839   13790 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0910 17:30:18.212624   13790 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0910 17:30:18.213194   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0910 17:30:18.213618   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.213751   13790 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0910 17:30:18.213857   13790 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0910 17:30:18.213876   13790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0910 17:30:18.213899   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:30:18.214557   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.214581   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.215466   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.215657   13790 main.go:141] libmachine: (addons-447248) Calling .GetState
	I0910 17:30:18.216850   13790 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0910 17:30:18.216961   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36205
	I0910 17:30:18.217491   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:30:18.217559   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.217931   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:30:18.218088   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.218126   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:30:18.218280   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:30:18.218414   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:30:18.218977   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I0910 17:30:18.218683   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	I0910 17:30:18.219536   13790 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0910 17:30:18.220426   13790 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0910 17:30:18.221389   13790 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0910 17:30:18.221407   13790 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0910 17:30:18.221431   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:30:18.223573   13790 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0910 17:30:18.223787   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.224092   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:30:18.224119   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.224249   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:30:18.224426   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:30:18.224590   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:30:18.224718   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	I0910 17:30:18.226043   13790 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0910 17:30:18.227390   13790 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0910 17:30:18.228666   13790 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0910 17:30:18.228685   13790 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0910 17:30:18.228704   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:30:18.231240   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.231558   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:30:18.231583   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.231759   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:30:18.231927   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:30:18.232058   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:30:18.232183   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	I0910 17:30:18.251092   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.251253   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:18.251673   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.251693   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.251799   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:18.251821   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:18.252033   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.252116   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:18.252224   13790 main.go:141] libmachine: (addons-447248) Calling .GetState
	I0910 17:30:18.252318   13790 main.go:141] libmachine: (addons-447248) Calling .GetState
	I0910 17:30:18.254085   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:30:18.254425   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:30:18.254708   13790 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 17:30:18.254722   13790 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 17:30:18.254739   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:30:18.256283   13790 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0910 17:30:18.257283   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.257678   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:30:18.257706   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.257891   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:30:18.258076   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:30:18.258216   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:30:18.258348   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	I0910 17:30:18.258892   13790 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	W0910 17:30:18.259881   13790 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40648->192.168.39.59:22: read: connection reset by peer
	I0910 17:30:18.259907   13790 retry.go:31] will retry after 259.218014ms: ssh: handshake failed: read tcp 192.168.39.1:40648->192.168.39.59:22: read: connection reset by peer
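
The two sshutil warnings above mean sshd on the node reset the connection while it was still settling; retry.go answers by sleeping a randomized interval and redialing rather than failing the addon install. The shape of that loop, sketched (illustrative only, not minikube's actual retry.go; attempt count and cap are arbitrary):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryAfterJitter re-runs fn after a random wait below maxWait, mirroring
    // the "will retry after 259.218014ms" lines in the log above.
    func retryAfterJitter(attempts int, maxWait time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		wait := time.Duration(rand.Int63n(int64(maxWait)))
    		fmt.Printf("will retry after %s: %v\n", wait, err)
    		time.Sleep(wait)
    	}
    	return err
    }

    func main() {
    	dials := 0
    	err := retryAfterJitter(5, 300*time.Millisecond, func() error {
    		dials++
    		if dials < 3 { // sshd not accepting connections yet
    			return errors.New("ssh: handshake failed: connection reset by peer")
    		}
    		return nil
    	})
    	fmt.Println("handshake result:", err)
    }
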
	I0910 17:30:18.261861   13790 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0910 17:30:18.263642   13790 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0910 17:30:18.263665   13790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0910 17:30:18.263691   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:30:18.266832   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.267267   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:30:18.267295   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:18.267512   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:30:18.267701   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:30:18.267925   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:30:18.268063   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	W0910 17:30:18.274242   13790 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40664->192.168.39.59:22: read: connection reset by peer
	I0910 17:30:18.274281   13790 retry.go:31] will retry after 134.685819ms: ssh: handshake failed: read tcp 192.168.39.1:40664->192.168.39.59:22: read: connection reset by peer
	W0910 17:30:18.411173   13790 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40680->192.168.39.59:22: read: connection reset by peer
	I0910 17:30:18.411205   13790 retry.go:31] will retry after 268.314688ms: ssh: handshake failed: read tcp 192.168.39.1:40680->192.168.39.59:22: read: connection reset by peer
	I0910 17:30:18.529728   13790 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0910 17:30:18.529751   13790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0910 17:30:18.566399   13790 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0910 17:30:18.566420   13790 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0910 17:30:18.588104   13790 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0910 17:30:18.588128   13790 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0910 17:30:18.710623   13790 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 17:30:18.710643   13790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0910 17:30:18.716633   13790 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0910 17:30:18.716654   13790 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0910 17:30:18.758911   13790 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0910 17:30:18.758941   13790 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0910 17:30:18.796735   13790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:30:18.800765   13790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0910 17:30:18.842879   13790 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0910 17:30:18.842909   13790 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0910 17:30:18.843398   13790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0910 17:30:18.918877   13790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0910 17:30:18.918876   13790 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0910 17:30:18.918935   13790 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0910 17:30:18.919375   13790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0910 17:30:18.939980   13790 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0910 17:30:18.940008   13790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0910 17:30:18.966727   13790 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0910 17:30:18.966759   13790 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0910 17:30:18.975021   13790 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0910 17:30:18.975048   13790 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0910 17:30:18.998977   13790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0910 17:30:19.178273   13790 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 17:30:19.178300   13790 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 17:30:19.194145   13790 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0910 17:30:19.194168   13790 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0910 17:30:19.238041   13790 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0910 17:30:19.238078   13790 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0910 17:30:19.242611   13790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0910 17:30:19.248670   13790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0910 17:30:19.316414   13790 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0910 17:30:19.316452   13790 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0910 17:30:19.374490   13790 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0910 17:30:19.374515   13790 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0910 17:30:19.428878   13790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 17:30:19.679082   13790 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0910 17:30:19.679113   13790 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0910 17:30:19.721726   13790 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 17:30:19.721757   13790 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 17:30:19.777450   13790 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0910 17:30:19.777489   13790 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0910 17:30:20.008631   13790 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:30:20.008657   13790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0910 17:30:20.072109   13790 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0910 17:30:20.072139   13790 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0910 17:30:20.157459   13790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0910 17:30:20.182154   13790 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0910 17:30:20.182186   13790 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0910 17:30:20.195107   13790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 17:30:20.209364   13790 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0910 17:30:20.209399   13790 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0910 17:30:20.320108   13790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:30:20.376439   13790 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0910 17:30:20.376504   13790 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0910 17:30:20.407789   13790 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0910 17:30:20.407822   13790 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0910 17:30:20.441740   13790 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0910 17:30:20.441764   13790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0910 17:30:20.534951   13790 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0910 17:30:20.534978   13790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0910 17:30:20.719882   13790 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0910 17:30:20.719915   13790 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0910 17:30:20.893723   13790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0910 17:30:20.962612   13790 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0910 17:30:20.962646   13790 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0910 17:30:20.965581   13790 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0910 17:30:20.965608   13790 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0910 17:30:21.242715   13790 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0910 17:30:21.242748   13790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0910 17:30:21.307560   13790 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0910 17:30:21.307591   13790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0910 17:30:21.465678   13790 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0910 17:30:21.465707   13790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0910 17:30:21.574120   13790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0910 17:30:21.726361   13790 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0910 17:30:21.726386   13790 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0910 17:30:21.823454   13790 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.293670376s)
	I0910 17:30:21.823473   13790 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.293708554s)
	I0910 17:30:21.823491   13790 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
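
The pipeline that just completed (the 3.29s Run above) edits the coredns ConfigMap in place: it dumps the Corefile, uses sed to insert a hosts stanza ahead of the existing forward directive and a log directive ahead of errors, then pushes the result back through kubectl replace. Reconstructed from the sed expressions, the inserted stanza is:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }

With fallthrough, names the hosts plugin does not match still continue to forward (and so to /etc/resolv.conf), so only host.minikube.internal gains a new answer.
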
	I0910 17:30:21.824392   13790 node_ready.go:35] waiting up to 6m0s for node "addons-447248" to be "Ready" ...
	I0910 17:30:21.867483   13790 node_ready.go:49] node "addons-447248" has status "Ready":"True"
	I0910 17:30:21.867526   13790 node_ready.go:38] duration metric: took 43.099686ms for node "addons-447248" to be "Ready" ...
	I0910 17:30:21.867537   13790 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:30:21.901498   13790 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2d7wv" in "kube-system" namespace to be "Ready" ...
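
pod_ready.go now polls each system-critical pod for a Ready condition. Reduced to its core and assuming client-go (this is a sketch, not minikube's actual helper), the check looks like:

    package podwait

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // WaitPodReady polls until the pod reports condition Ready=True. A pod that
    // reaches phase Succeeded can never become Ready again, so it is surfaced
    // as a terminal error -- the case logged as "(skipping!)" further down.
    func WaitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient API error: keep polling
    			}
    			if pod.Status.Phase == corev1.PodSucceeded {
    				return false, fmt.Errorf("pod %s/%s exited (phase Succeeded)", ns, name)
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }
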
	I0910 17:30:22.107142   13790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0910 17:30:22.375697   13790 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-447248" context rescaled to 1 replicas
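
kapi.go:214 drops the coredns deployment from its bootstrap replica count to 1. With client-go this is a read-modify-write of the Scale subresource; a sketch under that assumption (not minikube's code):

    package rescale

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // scaleDeployment sets the replica count via the Scale subresource, which
    // rescales a deployment without touching the rest of its spec.
    func scaleDeployment(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
    	scale, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	scale.Spec.Replicas = replicas
    	_, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
    	return err
    }

One replica of the old ReplicaSet is torn down as a result, which is consistent with coredns-6f6b679f8f-2d7wv turning up with phase Succeeded a few lines below.
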
	I0910 17:30:23.979660   13790 pod_ready.go:103] pod "coredns-6f6b679f8f-2d7wv" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:24.257364   13790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.456556998s)
	I0910 17:30:24.257417   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:24.257430   13790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.460663061s)
	I0910 17:30:24.257481   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:24.257493   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:24.257446   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:24.257808   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:24.257856   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:24.257834   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:24.257834   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:24.257885   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:24.257891   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:24.257902   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:24.257913   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:24.257903   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:24.257972   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:24.258141   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:24.258163   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:24.258327   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:24.258332   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:24.258345   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:25.178823   13790 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0910 17:30:25.178857   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:30:25.182280   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:25.182751   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:30:25.182786   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:25.182994   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:30:25.183205   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:30:25.183338   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:30:25.183477   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	I0910 17:30:25.836504   13790 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0910 17:30:26.157801   13790 addons.go:234] Setting addon gcp-auth=true in "addons-447248"
	I0910 17:30:26.157859   13790 host.go:66] Checking if "addons-447248" exists ...
	I0910 17:30:26.158162   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:26.158190   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:26.173958   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41043
	I0910 17:30:26.174332   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:26.174853   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:26.174876   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:26.175235   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:26.175751   13790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:30:26.175793   13790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:30:26.191441   13790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38613
	I0910 17:30:26.191897   13790 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:30:26.192362   13790 main.go:141] libmachine: Using API Version  1
	I0910 17:30:26.192387   13790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:30:26.192776   13790 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:30:26.193049   13790 main.go:141] libmachine: (addons-447248) Calling .GetState
	I0910 17:30:26.194915   13790 main.go:141] libmachine: (addons-447248) Calling .DriverName
	I0910 17:30:26.195146   13790 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0910 17:30:26.195162   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHHostname
	I0910 17:30:26.198187   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:26.198729   13790 main.go:141] libmachine: (addons-447248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:d3:4b", ip: ""} in network mk-addons-447248: {Iface:virbr1 ExpiryTime:2024-09-10 18:29:39 +0000 UTC Type:0 Mac:52:54:00:af:d3:4b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:addons-447248 Clientid:01:52:54:00:af:d3:4b}
	I0910 17:30:26.198759   13790 main.go:141] libmachine: (addons-447248) DBG | domain addons-447248 has defined IP address 192.168.39.59 and MAC address 52:54:00:af:d3:4b in network mk-addons-447248
	I0910 17:30:26.198963   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHPort
	I0910 17:30:26.199173   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHKeyPath
	I0910 17:30:26.199342   13790 main.go:141] libmachine: (addons-447248) Calling .GetSSHUsername
	I0910 17:30:26.199506   13790 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/addons-447248/id_rsa Username:docker}
	I0910 17:30:26.455793   13790 pod_ready.go:103] pod "coredns-6f6b679f8f-2d7wv" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:28.984994   13790 pod_ready.go:103] pod "coredns-6f6b679f8f-2d7wv" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:29.071371   13790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.227937774s)
	I0910 17:30:29.071439   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:29.071384   13790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.15246585s)
	I0910 17:30:29.071449   13790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.152040132s)
	I0910 17:30:29.071454   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:29.071495   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:29.071501   13790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.828855759s)
	I0910 17:30:29.071512   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:29.071518   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:29.071526   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:29.071543   13790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.822845493s)
	I0910 17:30:29.071458   13790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (10.072454958s)
	I0910 17:30:29.071582   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:29.071590   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:29.071601   13790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.642699785s)
	I0910 17:30:29.071489   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:29.071621   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:29.071626   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:29.071639   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:29.071775   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:29.071792   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:29.071803   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:29.071813   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:29.071926   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:29.071961   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:29.071963   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:29.071968   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:29.071975   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:29.071980   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:29.071983   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:29.071987   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:29.071991   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:29.072230   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:29.072253   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:29.072260   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:29.073764   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:29.073849   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:29.073858   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:29.073868   13790 addons.go:475] Verifying addon ingress=true in "addons-447248"
	I0910 17:30:29.073923   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:29.074044   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:29.074072   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:29.074086   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:29.074092   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:29.074101   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:29.074116   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:29.074122   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:29.074142   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:29.074142   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:29.074163   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:29.074166   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:29.074173   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:29.074194   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:29.074174   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:29.074247   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:29.074256   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:29.074350   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:29.074375   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:29.074390   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:29.074413   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:29.074436   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:29.074443   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:29.074451   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:29.074458   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:29.074564   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:29.074573   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:29.074756   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:29.074778   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:29.074784   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:29.074861   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:29.074886   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:29.074893   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:29.074901   13790 addons.go:475] Verifying addon registry=true in "addons-447248"
	I0910 17:30:29.073973   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:29.075507   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:29.077126   13790 out.go:177] * Verifying registry addon...
	I0910 17:30:29.077154   13790 out.go:177] * Verifying ingress addon...
	I0910 17:30:29.079213   13790 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0910 17:30:29.079343   13790 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0910 17:30:29.097606   13790 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0910 17:30:29.097638   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:29.098166   13790 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0910 17:30:29.098193   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
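The two "Waiting for pod" loops above poll the cluster by label selector until every matching pod leaves Pending. A minimal client-go sketch of that kind of wait loop, assuming a pre-built clientset `cs` (the function name, poll interval, and logging are illustrative, not minikube's actual kapi.go code):

	package sketch

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsRunning polls pods matching selector in ns until all are Running.
	// Illustrative sketch only -- not minikube's actual kapi.go implementation.
	func waitForPodsRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient error, or pods not created yet: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	}

For the selector being waited on here, a call would look like waitForPodsRunning(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute).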
	I0910 17:30:29.122916   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:29.122939   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:29.123235   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:29.123262   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:29.123278   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	W0910 17:30:29.123367   13790 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
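The "object has been modified" warning above is the apiserver's optimistic-concurrency conflict: the storage-class update was computed against a stale resourceVersion because another writer got there first. The standard remedy is to re-read and retry, which client-go packages as retry.RetryOnConflict; a minimal sketch under that assumption (the clientset `cs` and function name are illustrative):

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markNonDefault clears the default-class annotation on a StorageClass,
	// re-reading and retrying if the update hits a version conflict.
	// Illustrative sketch only -- not minikube's actual code.
	func markNonDefault(cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
			return err // a Conflict here triggers a fresh Get and another attempt
		})
	}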
	I0910 17:30:29.153015   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:29.153040   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:29.153448   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:29.153468   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:29.153478   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:29.618347   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:29.618635   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:30.127101   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:30.127263   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:30.731191   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:30.807430   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:30.976046   13790 pod_ready.go:98] pod "coredns-6f6b679f8f-2d7wv" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:30 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:18 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.59 HostIPs:[{IP:192.168.39.59}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-10 17:30:18 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-10 17:30:19 +0000 UTC,FinishedAt:2024-09-10 17:30:29 +0000 UTC,ContainerID:docker://5cf87a0727d6b03bd2fdb43f01e0c515fc3e9c85bc079076d4d0f7d1053e9e93,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://5cf87a0727d6b03bd2fdb43f01e0c515fc3e9c85bc079076d4d0f7d1053e9e93 Started:0xc0003bd530 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000441c70} {Name:kube-api-access-kq87t MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000441c80}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0910 17:30:30.976077   13790 pod_ready.go:82] duration metric: took 9.074542445s for pod "coredns-6f6b679f8f-2d7wv" in "kube-system" namespace to be "Ready" ...
	E0910 17:30:30.976090   13790 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-2d7wv" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:30 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:18 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.59 HostIPs:[{IP:192.168.39.59}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-10 17:30:18 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-10 17:30:19 +0000 UTC,FinishedAt:2024-09-10 17:30:29 +0000 UTC,ContainerID:docker://5cf87a0727d6b03bd2fdb43f01e0c515fc3e9c85bc079076d4d0f7d1053e9e93,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://5cf87a0727d6b03bd2fdb43f01e0c515fc3e9c85bc079076d4d0f7d1053e9e93 Started:0xc0003bd530 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000441c70} {Name:kube-api-access-kq87t MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000441c80}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
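The waiter above deliberately skips pods whose phase is Succeeded: the old coredns replica completed during the scale-down from two replicas to one, so it can never become Ready and waiting on it would be pointless. A sketch of that readiness predicate, assuming a hypothetical helper name (it mirrors what pod_ready.go logs, not its exact code):

	package sketch

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// podReady reports whether the pod's Ready condition is True, and returns
	// an error for terminal phases so callers can skip to the next pod.
	// Illustrative sketch only.
	func podReady(pod *corev1.Pod) (bool, error) {
		switch pod.Status.Phase {
		case corev1.PodSucceeded, corev1.PodFailed:
			return false, fmt.Errorf("pod %s has terminal phase %s (skipping)", pod.Name, pod.Status.Phase)
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}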
	I0910 17:30:30.976103   13790 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-954g7" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:31.045470   13790 pod_ready.go:93] pod "coredns-6f6b679f8f-954g7" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:31.045499   13790 pod_ready.go:82] duration metric: took 69.382866ms for pod "coredns-6f6b679f8f-954g7" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:31.045516   13790 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-447248" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:31.105687   13790 pod_ready.go:93] pod "etcd-addons-447248" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:31.105714   13790 pod_ready.go:82] duration metric: took 60.187703ms for pod "etcd-addons-447248" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:31.105726   13790 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-447248" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:31.116229   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:31.116578   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:31.155969   13790 pod_ready.go:93] pod "kube-apiserver-addons-447248" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:31.156002   13790 pod_ready.go:82] duration metric: took 50.264915ms for pod "kube-apiserver-addons-447248" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:31.156016   13790 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-447248" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:31.193058   13790 pod_ready.go:93] pod "kube-controller-manager-addons-447248" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:31.193082   13790 pod_ready.go:82] duration metric: took 37.056062ms for pod "kube-controller-manager-addons-447248" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:31.193096   13790 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r6wh7" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:31.307879   13790 pod_ready.go:93] pod "kube-proxy-r6wh7" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:31.307973   13790 pod_ready.go:82] duration metric: took 114.857747ms for pod "kube-proxy-r6wh7" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:31.308002   13790 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-447248" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:31.607560   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:31.610044   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:31.771409   13790 pod_ready.go:93] pod "kube-scheduler-addons-447248" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:31.771437   13790 pod_ready.go:82] duration metric: took 463.421035ms for pod "kube-scheduler-addons-447248" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:31.771450   13790 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-zwwn8" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:31.776470   13790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.618977112s)
	I0910 17:30:31.776537   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:31.776549   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:31.776581   13790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.581438774s)
	I0910 17:30:31.776620   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:31.776638   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:31.776701   13790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (11.456554413s)
	W0910 17:30:31.776742   13790 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0910 17:30:31.776822   13790 retry.go:31] will retry after 240.400075ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
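This retry is the classic CRD-ordering race: the VolumeSnapshotClass object is applied in the same kubectl invocation as the CRD that defines it, so the REST mapping does not exist yet when the CR is processed. minikube retries after 240ms, and the forced re-apply at 17:30:32 below completes successfully. A robust alternative is to wait for the CRD's Established condition before applying custom resources; a sketch with the apiextensions client (the clientset `cs`, function name, and timings are illustrative assumptions):

	package sketch

	import (
		"context"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
	)

	// waitForCRDEstablished polls until the named CRD reports Established=True,
	// after which custom resources of that kind can be applied safely.
	// Illustrative sketch only.
	func waitForCRDEstablished(cs apiextclient.Interface, name string, timeout time.Duration) error {
		return wait.PollImmediate(250*time.Millisecond, timeout, func() (bool, error) {
			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // not created yet: keep polling
			}
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	}

For this failure, the wait would target the snapshot CRD before applying csi-hostpath-snapshotclass.yaml, e.g. waitForCRDEstablished(cs, "volumesnapshotclasses.snapshot.storage.k8s.io", 30*time.Second).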
	I0910 17:30:31.776827   13790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (10.202663974s)
	I0910 17:30:31.776879   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:31.776897   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:31.776756   13790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.882994852s)
	I0910 17:30:31.776950   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:31.776964   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:31.777043   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:31.777067   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:31.777163   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:31.777182   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:31.777194   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:31.777245   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:31.777255   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:31.777263   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:31.777270   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:31.777086   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:31.777093   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:31.777339   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:31.777348   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:31.777355   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:31.777115   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:31.778733   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:31.778747   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:31.778752   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:31.778768   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:31.778779   13790 addons.go:475] Verifying addon metrics-server=true in "addons-447248"
	I0910 17:30:31.778780   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:31.778791   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:31.778799   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:31.778807   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:31.778833   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:31.778832   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:31.778845   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:31.779137   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:31.779153   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:31.779159   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:31.779826   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:31.779841   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:31.779865   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:31.780919   13790 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-447248 service yakd-dashboard -n yakd-dashboard
	
	I0910 17:30:32.017901   13790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:30:32.277943   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:32.278755   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:32.624476   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:32.624862   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:32.940671   13790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (10.833464462s)
	I0910 17:30:32.940710   13790 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.745541441s)
	I0910 17:30:32.940727   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:32.940743   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:32.941054   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:32.941121   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:32.941144   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:32.941164   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:32.941176   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:32.941481   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:32.941518   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:32.941533   13790 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-447248"
	I0910 17:30:32.942612   13790 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 17:30:32.943638   13790 out.go:177] * Verifying csi-hostpath-driver addon...
	I0910 17:30:32.945487   13790 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0910 17:30:32.946214   13790 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0910 17:30:32.947137   13790 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0910 17:30:32.947156   13790 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0910 17:30:32.955048   13790 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0910 17:30:32.955070   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:33.046689   13790 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0910 17:30:33.046716   13790 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0910 17:30:33.087485   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:33.087856   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:33.121122   13790 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0910 17:30:33.121142   13790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0910 17:30:33.216876   13790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0910 17:30:33.458354   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:33.587046   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:33.590073   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:33.778512   13790 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-zwwn8" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:33.952395   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:34.088144   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:34.090004   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:34.194524   13790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.176556237s)
	I0910 17:30:34.194605   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:34.194628   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:34.194945   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:34.194963   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:34.194980   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:34.194991   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:34.195244   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:34.195264   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:34.456455   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:34.600661   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:34.692689   13790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.475760104s)
	I0910 17:30:34.692738   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:34.692754   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:34.693059   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:34.693102   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:34.693125   13790 main.go:141] libmachine: Making call to close driver server
	I0910 17:30:34.693138   13790 main.go:141] libmachine: (addons-447248) Calling .Close
	I0910 17:30:34.693106   13790 main.go:141] libmachine: (addons-447248) DBG | Closing plugin on server side
	I0910 17:30:34.693461   13790 main.go:141] libmachine: Successfully made call to close driver server
	I0910 17:30:34.693475   13790 main.go:141] libmachine: Making call to close connection to plugin binary
	I0910 17:30:34.695820   13790 addons.go:475] Verifying addon gcp-auth=true in "addons-447248"
	I0910 17:30:34.697516   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:34.697797   13790 out.go:177] * Verifying gcp-auth addon...
	I0910 17:30:34.699929   13790 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0910 17:30:34.791570   13790 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0910 17:30:34.954641   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:35.083257   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:35.083920   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:35.451047   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:35.588568   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:35.589238   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:35.782776   13790 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-zwwn8" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:35.951026   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:36.084774   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:36.084872   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:36.450967   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:36.583559   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:36.583768   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:36.950804   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:37.083897   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:37.084389   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:37.451342   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:37.585544   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:37.585693   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:37.951368   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:38.084592   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:38.084949   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:38.280807   13790 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-zwwn8" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:38.451163   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:38.583115   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:38.584704   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:38.950956   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:39.084826   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:39.085574   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:39.451371   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:39.585012   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:39.585066   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:39.950729   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:40.085143   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:40.087129   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:40.652588   13790 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-zwwn8" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:40.654398   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:40.655006   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:40.661046   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:40.951633   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:41.083288   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:41.083389   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:41.451494   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:41.584607   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:41.586961   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:41.781831   13790 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-zwwn8" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:41.781862   13790 pod_ready.go:82] duration metric: took 10.01040476s for pod "nvidia-device-plugin-daemonset-zwwn8" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:41.781874   13790 pod_ready.go:39] duration metric: took 19.914323978s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:30:41.781896   13790 api_server.go:52] waiting for apiserver process to appear ...
	I0910 17:30:41.781959   13790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:41.800135   13790 api_server.go:72] duration metric: took 23.757343265s to wait for apiserver process to appear ...
	I0910 17:30:41.800165   13790 api_server.go:88] waiting for apiserver healthz status ...
	I0910 17:30:41.800183   13790 api_server.go:253] Checking apiserver healthz at https://192.168.39.59:8443/healthz ...
	I0910 17:30:41.805501   13790 api_server.go:279] https://192.168.39.59:8443/healthz returned 200:
	ok
	I0910 17:30:41.806742   13790 api_server.go:141] control plane version: v1.31.0
	I0910 17:30:41.806770   13790 api_server.go:131] duration metric: took 6.597857ms to wait for apiserver health ...
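The healthz probe logged above is a plain HTTPS GET against the apiserver that expects status 200 and the literal body "ok". A minimal sketch; InsecureSkipVerify stands in for the client-certificate TLS config minikube builds from its kubeconfig, and it assumes the cluster serves /healthz to this client:

	package sketch

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz performs the GET shown in the log and validates the "ok" body.
	// Illustrative sketch only; real code would use the kubeconfig's TLS material.
	func checkHealthz(endpoint string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}

A call matching this log would be checkHealthz("https://192.168.39.59:8443").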
	I0910 17:30:41.806780   13790 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 17:30:41.815411   13790 system_pods.go:59] 18 kube-system pods found
	I0910 17:30:41.815450   13790 system_pods.go:61] "coredns-6f6b679f8f-954g7" [eb0167e6-4bb4-4151-b01b-fa7a6196f4da] Running
	I0910 17:30:41.815458   13790 system_pods.go:61] "csi-hostpath-attacher-0" [6c21c7e0-f335-478a-a649-281f96fc7883] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0910 17:30:41.815465   13790 system_pods.go:61] "csi-hostpath-resizer-0" [3c0cab5c-7d30-4e15-84a5-42d30178f7e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0910 17:30:41.815493   13790 system_pods.go:61] "csi-hostpathplugin-nkx7t" [c91f10dd-bd37-4516-a58b-5280effdfc39] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0910 17:30:41.815498   13790 system_pods.go:61] "etcd-addons-447248" [5c47dbd4-f1db-4018-ba78-1c89842c5b9b] Running
	I0910 17:30:41.815504   13790 system_pods.go:61] "kube-apiserver-addons-447248" [0cd57d18-08ed-4e3b-b52d-f3274358b11b] Running
	I0910 17:30:41.815508   13790 system_pods.go:61] "kube-controller-manager-addons-447248" [240a4db4-3c1b-4253-8bb9-81aca859ee7f] Running
	I0910 17:30:41.815518   13790 system_pods.go:61] "kube-ingress-dns-minikube" [ccebe56c-6499-449a-86fe-3409be2323ef] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0910 17:30:41.815522   13790 system_pods.go:61] "kube-proxy-r6wh7" [b62d8b0d-fc24-403e-909e-c50cd9d9a527] Running
	I0910 17:30:41.815527   13790 system_pods.go:61] "kube-scheduler-addons-447248" [c8e57668-ed15-4fc5-a7cb-87f298d62f9d] Running
	I0910 17:30:41.815532   13790 system_pods.go:61] "metrics-server-84c5f94fbc-j6wml" [211ad8e2-5d48-416b-9ba8-e4ddb773a576] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 17:30:41.815538   13790 system_pods.go:61] "nvidia-device-plugin-daemonset-zwwn8" [35ae1fa5-59e3-488b-ba97-e0cfeba39e93] Running
	I0910 17:30:41.815544   13790 system_pods.go:61] "registry-66c9cd494c-vdrtp" [85b87341-00c1-4bec-876a-9eabfeb2cb35] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0910 17:30:41.815548   13790 system_pods.go:61] "registry-proxy-vktjt" [8a998f90-a892-4121-b82b-dbe047da7b63] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0910 17:30:41.815556   13790 system_pods.go:61] "snapshot-controller-56fcc65765-qctfd" [2818fcd2-5b6b-4313-b029-23cad3db8281] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:30:41.815565   13790 system_pods.go:61] "snapshot-controller-56fcc65765-shfdh" [2c9164e4-2bd8-430b-aeb2-6ae162693314] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:30:41.815569   13790 system_pods.go:61] "storage-provisioner" [01ed3648-2356-4db8-a87a-f0db0cf3671e] Running
	I0910 17:30:41.815577   13790 system_pods.go:61] "tiller-deploy-b48cc5f79-925zh" [0f2d7fe9-69b8-477a-8a9d-a285eb7bfd9a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0910 17:30:41.815583   13790 system_pods.go:74] duration metric: took 8.797914ms to wait for pod list to return data ...
	I0910 17:30:41.815592   13790 default_sa.go:34] waiting for default service account to be created ...
	I0910 17:30:41.819477   13790 default_sa.go:45] found service account: "default"
	I0910 17:30:41.819506   13790 default_sa.go:55] duration metric: took 3.907415ms for default service account to be created ...
	I0910 17:30:41.819518   13790 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 17:30:41.828871   13790 system_pods.go:86] 18 kube-system pods found
	I0910 17:30:41.828906   13790 system_pods.go:89] "coredns-6f6b679f8f-954g7" [eb0167e6-4bb4-4151-b01b-fa7a6196f4da] Running
	I0910 17:30:41.828916   13790 system_pods.go:89] "csi-hostpath-attacher-0" [6c21c7e0-f335-478a-a649-281f96fc7883] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0910 17:30:41.828922   13790 system_pods.go:89] "csi-hostpath-resizer-0" [3c0cab5c-7d30-4e15-84a5-42d30178f7e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0910 17:30:41.828930   13790 system_pods.go:89] "csi-hostpathplugin-nkx7t" [c91f10dd-bd37-4516-a58b-5280effdfc39] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0910 17:30:41.828934   13790 system_pods.go:89] "etcd-addons-447248" [5c47dbd4-f1db-4018-ba78-1c89842c5b9b] Running
	I0910 17:30:41.828938   13790 system_pods.go:89] "kube-apiserver-addons-447248" [0cd57d18-08ed-4e3b-b52d-f3274358b11b] Running
	I0910 17:30:41.828942   13790 system_pods.go:89] "kube-controller-manager-addons-447248" [240a4db4-3c1b-4253-8bb9-81aca859ee7f] Running
	I0910 17:30:41.828948   13790 system_pods.go:89] "kube-ingress-dns-minikube" [ccebe56c-6499-449a-86fe-3409be2323ef] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0910 17:30:41.828951   13790 system_pods.go:89] "kube-proxy-r6wh7" [b62d8b0d-fc24-403e-909e-c50cd9d9a527] Running
	I0910 17:30:41.828955   13790 system_pods.go:89] "kube-scheduler-addons-447248" [c8e57668-ed15-4fc5-a7cb-87f298d62f9d] Running
	I0910 17:30:41.828960   13790 system_pods.go:89] "metrics-server-84c5f94fbc-j6wml" [211ad8e2-5d48-416b-9ba8-e4ddb773a576] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 17:30:41.828963   13790 system_pods.go:89] "nvidia-device-plugin-daemonset-zwwn8" [35ae1fa5-59e3-488b-ba97-e0cfeba39e93] Running
	I0910 17:30:41.828977   13790 system_pods.go:89] "registry-66c9cd494c-vdrtp" [85b87341-00c1-4bec-876a-9eabfeb2cb35] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0910 17:30:41.828988   13790 system_pods.go:89] "registry-proxy-vktjt" [8a998f90-a892-4121-b82b-dbe047da7b63] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0910 17:30:41.828994   13790 system_pods.go:89] "snapshot-controller-56fcc65765-qctfd" [2818fcd2-5b6b-4313-b029-23cad3db8281] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:30:41.829000   13790 system_pods.go:89] "snapshot-controller-56fcc65765-shfdh" [2c9164e4-2bd8-430b-aeb2-6ae162693314] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:30:41.829005   13790 system_pods.go:89] "storage-provisioner" [01ed3648-2356-4db8-a87a-f0db0cf3671e] Running
	I0910 17:30:41.829012   13790 system_pods.go:89] "tiller-deploy-b48cc5f79-925zh" [0f2d7fe9-69b8-477a-8a9d-a285eb7bfd9a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0910 17:30:41.829020   13790 system_pods.go:126] duration metric: took 9.496453ms to wait for k8s-apps to be running ...
	I0910 17:30:41.829029   13790 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 17:30:41.829074   13790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:30:41.843686   13790 system_svc.go:56] duration metric: took 14.646843ms WaitForService to wait for kubelet
	I0910 17:30:41.843726   13790 kubeadm.go:582] duration metric: took 23.800938838s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:30:41.843750   13790 node_conditions.go:102] verifying NodePressure condition ...
	I0910 17:30:41.850378   13790 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0910 17:30:41.850406   13790 node_conditions.go:123] node cpu capacity is 2
	I0910 17:30:41.850420   13790 node_conditions.go:105] duration metric: took 6.665052ms to run NodePressure ...
	I0910 17:30:41.850433   13790 start.go:241] waiting for startup goroutines ...
	I0910 17:30:41.951223   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:42.084192   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:42.084527   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:42.451617   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:42.584854   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:42.585602   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:42.951474   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:43.084213   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:43.085222   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:43.451022   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:43.584687   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:43.584777   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:43.951447   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:44.084034   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:44.084301   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:44.450397   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:44.584504   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:44.584782   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:44.996479   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:45.084384   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:45.085277   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:45.454559   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:45.585778   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:45.585852   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:45.951296   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:46.083890   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:46.084109   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:46.451109   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:46.584348   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:46.584785   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:47.002054   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:47.104316   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:47.104364   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:47.450892   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:47.588791   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:47.588845   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:47.953064   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:48.104665   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:48.108261   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:48.450602   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:48.584821   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:48.585071   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:48.950948   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:49.084111   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:49.084851   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:49.451324   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:49.583306   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:49.584796   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:49.951646   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:50.083508   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:50.083654   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:50.450313   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:50.583674   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:50.584518   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:50.950446   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:51.085049   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:51.085400   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:51.451390   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:51.584793   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:51.584866   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:51.951287   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:52.083109   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:52.083995   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:52.451022   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:52.583502   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:52.583617   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:52.951849   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:53.083919   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:53.084065   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:53.451187   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:53.584348   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:53.584409   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:53.952116   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:54.084422   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:54.084844   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:54.451163   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:54.584138   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:54.584748   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:54.950315   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:55.084232   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:55.084305   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:55.703450   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:55.703611   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:55.703928   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:55.951655   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:56.085358   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:56.085448   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:56.451287   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:56.584827   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:56.585090   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:56.950445   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:57.083897   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:57.084004   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:57.450970   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:57.584822   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:57.585863   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:57.951408   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:58.083453   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:58.085143   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:58.450105   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:58.583208   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:58.584047   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:58.951101   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:59.083766   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:59.084038   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:59.450468   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:59.584043   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:59.584465   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:30:59.951354   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:00.084922   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:00.085437   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:00.450736   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:00.583926   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:00.584256   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:01.078194   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:01.083725   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:01.084134   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:01.452426   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:01.584259   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:01.584686   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:02.078094   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:02.084949   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:02.085623   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:02.451117   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:02.583832   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:02.584174   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:02.951358   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:03.086319   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:31:03.090644   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:03.451423   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:03.584284   13790 kapi.go:107] duration metric: took 34.504934853s to wait for kubernetes.io/minikube-addons=registry ...
	I0910 17:31:03.584658   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:03.950973   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:04.084385   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:04.451008   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:04.584742   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:04.952753   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:05.088525   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:05.452496   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:05.583271   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:05.951337   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:06.083823   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:06.450861   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:06.585838   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:06.951281   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:07.083499   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:07.452095   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:07.583234   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:07.959147   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:08.085680   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:08.452185   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:08.583502   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:08.953723   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:09.083557   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:09.451865   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:09.584592   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:09.950791   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:10.084257   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:10.450904   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:10.583795   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:10.950950   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:11.084080   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:11.450601   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:11.583605   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:11.950744   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:12.083896   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:12.450900   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:12.584114   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:12.951770   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:13.251458   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:13.454130   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:13.584360   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:13.951332   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:14.084406   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:14.455635   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:14.603215   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:14.950763   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:15.084171   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:15.452243   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:15.583863   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:15.950853   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:16.083827   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:16.452456   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:16.583669   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:16.951110   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:17.082913   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:17.451047   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:17.719211   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:17.950824   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:18.084180   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:18.670733   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:18.671970   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:18.950503   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:19.083274   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:19.451489   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:19.583457   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:19.951206   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:20.084127   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:20.453685   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:20.586081   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:20.950842   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:21.084474   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:21.451141   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:21.584210   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:21.951450   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:22.083341   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:22.451903   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:22.583989   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:22.950428   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:23.083578   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:23.451604   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:23.583536   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:23.950495   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:24.083824   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:24.452513   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:24.584299   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:24.951041   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:25.084609   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:25.450613   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:25.584039   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:25.950773   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:26.084446   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:26.451489   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:26.608721   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:26.951984   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:27.087016   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:27.452718   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:27.584338   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:27.950855   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:28.084094   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:28.455232   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:28.583006   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:28.952522   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:29.083253   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:29.453776   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:29.590897   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:29.951310   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:30.085381   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:30.452503   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:30.583534   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:30.951198   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:31.083643   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:31.450951   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:31.584063   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:31.950562   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:32.084079   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:32.451247   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:32.583365   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:32.951888   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:33.087519   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:33.451325   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:33.584256   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:33.952195   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:34.083882   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:34.451453   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:34.584208   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:34.951228   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:35.083630   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:35.450497   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:35.583904   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:35.956216   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:36.090302   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:36.460938   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:36.585857   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:36.952953   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:37.083978   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:37.451423   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:37.584153   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:37.953705   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:38.092340   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:38.456915   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:38.598395   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:38.958355   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:39.091099   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:39.459623   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:39.595116   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:39.969005   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:40.093911   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:40.455129   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:40.583923   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:40.951232   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:41.083512   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:41.450573   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:41.583534   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:41.951561   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:42.083506   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:42.451286   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:42.584342   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:42.950844   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:43.084304   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:43.451187   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:43.583976   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:43.951012   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:44.084445   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:44.451922   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:44.584500   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:44.951747   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:45.083413   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:45.451251   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:45.584144   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:45.951862   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:46.084291   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:46.453126   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:46.835675   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:46.952818   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:47.086695   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:47.452634   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:47.583250   13790 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0910 17:31:47.951377   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:48.083099   13790 kapi.go:107] duration metric: took 1m19.003883078s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0910 17:31:48.451621   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:48.953113   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:49.451578   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:49.953806   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:50.493511   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:50.952904   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:51.451100   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:51.950788   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:52.451124   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:52.951846   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:53.451916   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:53.950808   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:54.451588   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:54.951793   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:31:55.452996   13790 kapi.go:107] duration metric: took 1m22.506779735s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0910 17:31:57.704345   13790 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0910 17:31:57.704370   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:58.204323   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:58.704504   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:59.204289   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:59.704053   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:00.203336   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:00.704700   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:01.204684   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:01.704916   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:02.203242   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:02.704306   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:03.204331   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:03.704127   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:04.204202   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:04.703996   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:05.205187   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:05.704015   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:06.204208   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:06.703837   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:07.203488   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:07.704141   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:08.204089   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:08.703599   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:09.204658   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:09.704227   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:10.203695   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:10.704465   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:11.204989   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:11.703442   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:12.204342   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:12.704725   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:13.204845   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:13.703947   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:14.204014   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:14.703345   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:15.203935   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:15.703538   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:16.204518   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:16.704243   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:17.204551   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:17.704625   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:18.203342   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:18.703996   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:19.204223   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:19.705765   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:20.203390   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:20.704128   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:21.204640   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:21.704065   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:22.203865   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:22.703711   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:23.205037   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:23.703990   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:24.203958   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:24.703886   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:25.203826   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:25.703793   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:26.204817   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:26.704186   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:27.203895   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:27.703446   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:28.205276   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:28.703560   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:29.204845   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:29.706071   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:30.203257   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:30.704167   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:31.204158   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:31.703361   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:32.204039   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:32.703787   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:33.204891   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:33.703237   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:34.203640   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:34.705271   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:35.204597   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:35.704484   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:36.204366   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:36.703591   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:37.205358   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:37.710894   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:38.203404   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:38.705227   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:39.203712   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:39.704304   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:40.204342   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:40.703795   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:41.204783   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:41.705575   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:42.204113   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:42.704070   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:43.203895   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:43.703268   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:44.203844   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:44.703432   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:45.203874   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:45.704663   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:46.204949   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:46.703796   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:47.202728   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:47.703789   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:48.204294   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:48.704066   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:49.203334   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:49.705027   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:50.203765   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:50.704340   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:51.204234   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:51.704322   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:52.204665   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:52.704663   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:53.204403   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:53.704362   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:54.203531   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:54.704486   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:55.205120   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:55.703875   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:56.203623   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:56.704779   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:57.203674   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:57.704725   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:58.203460   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:58.703772   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:59.203340   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:32:59.704355   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:00.203454   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:00.704084   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:01.203507   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:01.704510   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:02.204558   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:02.704055   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:03.203616   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:03.704416   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:04.204068   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:04.703858   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:05.204965   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:05.704835   13790 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:33:06.204046   13790 kapi.go:107] duration metric: took 2m31.504115952s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0910 17:33:06.205894   13790 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-447248 cluster.
	I0910 17:33:06.207292   13790 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0910 17:33:06.208694   13790 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0910 17:33:06.210006   13790 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, helm-tiller, cloud-spanner, nvidia-device-plugin, storage-provisioner-rancher, metrics-server, volcano, inspektor-gadget, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0910 17:33:06.211247   13790 addons.go:510] duration metric: took 2m48.168386531s for enable addons: enabled=[ingress-dns storage-provisioner helm-tiller cloud-spanner nvidia-device-plugin storage-provisioner-rancher metrics-server volcano inspektor-gadget yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0910 17:33:06.211285   13790 start.go:246] waiting for cluster config update ...
	I0910 17:33:06.211305   13790 start.go:255] writing updated cluster config ...
	I0910 17:33:06.211564   13790 ssh_runner.go:195] Run: rm -f paused
	I0910 17:33:06.263901   13790 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 17:33:06.266034   13790 out.go:177] * Done! kubectl is now configured to use "addons-447248" cluster and "default" namespace by default
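	The gcp-auth notes above mention two knobs worth illustrating. A minimal sketch, assuming the addons-447248 context from this run; the pod name "skip-auth-demo" and the busybox image are illustrative placeholders, not taken from the log:
	
	# Hypothetical example: opt a single pod out of credential injection
	# by setting the gcp-auth-skip-secret label at creation time.
	kubectl --context addons-447248 run skip-auth-demo --image=busybox \
	  --labels="gcp-auth-skip-secret=true" -- sleep 3600
	
	# Re-running the addon with --refresh (as the message above suggests)
	# mounts credentials into pods that already exist.
	out/minikube-linux-amd64 -p addons-447248 addons enable gcp-auth --refresh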
	
	
	==> Docker <==
	Sep 10 17:42:59 addons-447248 dockerd[1198]: time="2024-09-10T17:42:59.960132648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 17:42:59 addons-447248 dockerd[1198]: time="2024-09-10T17:42:59.960143524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 17:42:59 addons-447248 dockerd[1198]: time="2024-09-10T17:42:59.960266330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 17:43:01 addons-447248 dockerd[1198]: time="2024-09-10T17:43:01.896695193Z" level=info msg="shim disconnected" id=63476bb0b6ac7ad1343a497f9d4fcf1f8a4d6d279b33408334692299e50953df namespace=moby
	Sep 10 17:43:01 addons-447248 dockerd[1198]: time="2024-09-10T17:43:01.897139626Z" level=warning msg="cleaning up after shim disconnected" id=63476bb0b6ac7ad1343a497f9d4fcf1f8a4d6d279b33408334692299e50953df namespace=moby
	Sep 10 17:43:01 addons-447248 dockerd[1198]: time="2024-09-10T17:43:01.897319763Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 17:43:01 addons-447248 dockerd[1191]: time="2024-09-10T17:43:01.899412392Z" level=info msg="ignoring event" container=63476bb0b6ac7ad1343a497f9d4fcf1f8a4d6d279b33408334692299e50953df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:43:02 addons-447248 dockerd[1191]: time="2024-09-10T17:43:02.343336762Z" level=info msg="ignoring event" container=c4c8b4e7237dc25625807a0fe46b4188de565ff3f5c087fa2378da5b3ca9f3d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:43:02 addons-447248 dockerd[1198]: time="2024-09-10T17:43:02.344570800Z" level=info msg="shim disconnected" id=c4c8b4e7237dc25625807a0fe46b4188de565ff3f5c087fa2378da5b3ca9f3d2 namespace=moby
	Sep 10 17:43:02 addons-447248 dockerd[1198]: time="2024-09-10T17:43:02.344632025Z" level=warning msg="cleaning up after shim disconnected" id=c4c8b4e7237dc25625807a0fe46b4188de565ff3f5c087fa2378da5b3ca9f3d2 namespace=moby
	Sep 10 17:43:02 addons-447248 dockerd[1198]: time="2024-09-10T17:43:02.344643164Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 17:43:02 addons-447248 dockerd[1191]: time="2024-09-10T17:43:02.423004983Z" level=info msg="ignoring event" container=da939059963da96730fd283573b6f31501f86782f4cce02173feceeae8da48f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:43:02 addons-447248 dockerd[1198]: time="2024-09-10T17:43:02.424758446Z" level=info msg="shim disconnected" id=da939059963da96730fd283573b6f31501f86782f4cce02173feceeae8da48f7 namespace=moby
	Sep 10 17:43:02 addons-447248 dockerd[1198]: time="2024-09-10T17:43:02.424922802Z" level=warning msg="cleaning up after shim disconnected" id=da939059963da96730fd283573b6f31501f86782f4cce02173feceeae8da48f7 namespace=moby
	Sep 10 17:43:02 addons-447248 dockerd[1198]: time="2024-09-10T17:43:02.424993813Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 17:43:02 addons-447248 dockerd[1191]: time="2024-09-10T17:43:02.524280654Z" level=info msg="ignoring event" container=090bd8d8174aa7e3915cc93d268f4adaba3c29a527ae0a2c4316f5babcf0a610 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:43:02 addons-447248 dockerd[1198]: time="2024-09-10T17:43:02.524417020Z" level=info msg="shim disconnected" id=090bd8d8174aa7e3915cc93d268f4adaba3c29a527ae0a2c4316f5babcf0a610 namespace=moby
	Sep 10 17:43:02 addons-447248 dockerd[1198]: time="2024-09-10T17:43:02.524633587Z" level=warning msg="cleaning up after shim disconnected" id=090bd8d8174aa7e3915cc93d268f4adaba3c29a527ae0a2c4316f5babcf0a610 namespace=moby
	Sep 10 17:43:02 addons-447248 dockerd[1198]: time="2024-09-10T17:43:02.524705709Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 17:43:02 addons-447248 dockerd[1198]: time="2024-09-10T17:43:02.546259956Z" level=warning msg="cleanup warnings time=\"2024-09-10T17:43:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 10 17:43:02 addons-447248 dockerd[1191]: time="2024-09-10T17:43:02.653358502Z" level=info msg="ignoring event" container=652645be0d890629757c1c405364d195f75d12150d06990110ee5d9e9d29d2a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:43:02 addons-447248 dockerd[1198]: time="2024-09-10T17:43:02.655363467Z" level=info msg="shim disconnected" id=652645be0d890629757c1c405364d195f75d12150d06990110ee5d9e9d29d2a1 namespace=moby
	Sep 10 17:43:02 addons-447248 dockerd[1198]: time="2024-09-10T17:43:02.655433993Z" level=warning msg="cleaning up after shim disconnected" id=652645be0d890629757c1c405364d195f75d12150d06990110ee5d9e9d29d2a1 namespace=moby
	Sep 10 17:43:02 addons-447248 dockerd[1198]: time="2024-09-10T17:43:02.655444276Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 17:43:02 addons-447248 dockerd[1198]: time="2024-09-10T17:43:02.678838100Z" level=warning msg="cleanup warnings time=\"2024-09-10T17:43:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
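
Note: the "shim disconnected" / "cleaning up dead shim" events above line up with test pods being torn down around 17:43:02 (the same container IDs appear in the kubelet section below), not with a runtime fault. To list recently exited containers inside the VM (assuming the docker runtime reported in the node info below), something like:

	out/minikube-linux-amd64 -p addons-447248 ssh -- sudo docker ps -a --filter status=exited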
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	45391fe62ac30       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                        4 seconds ago       Running             headlamp                  0                   76868ac25c8d4       headlamp-57fb76fcdb-hdfcd
	61f8904d9f294       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  23 seconds ago      Running             hello-world-app           0                   fb8974d36e61d       hello-world-app-55bf9c44b4-q54hf
	186dadddc7ffa       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                33 seconds ago      Running             nginx                     0                   77708fe42f63a       nginx
	dfacd2810e974       alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                          40 seconds ago      Exited              helm-test                 0                   c7743fc54fa1b       helm-test
	c29f65fee3d53       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   7242abf7d2773       gcp-auth-89d5ffd79-fpjch
	14968a937c452       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   16d663fd726eb       ingress-nginx-admission-patch-qnqdr
	8f14fcd1d14ab       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   bc0ea2b74b438       ingress-nginx-admission-create-7jvcf
	9b5fc0da663e2       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   8e71e31233c51       storage-provisioner
	33e133ff95edc       cbb01a7bd410d                                                                                                                12 minutes ago      Running             coredns                   0                   b8c886e6df952       coredns-6f6b679f8f-954g7
	cc8465d270e62       ad83b2ca7b09e                                                                                                                12 minutes ago      Running             kube-proxy                0                   c96c70df00c85       kube-proxy-r6wh7
	f3b6b0755f204       1766f54c897f0                                                                                                                12 minutes ago      Running             kube-scheduler            0                   ed7ba16b83ae9       kube-scheduler-addons-447248
	f0511e42ae928       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   928fd8c35eb22       etcd-addons-447248
	e5e603932e5d4       604f5db92eaa8                                                                                                                12 minutes ago      Running             kube-apiserver            0                   5ee600341ea50       kube-apiserver-addons-447248
	070499a6c70bc       045733566833c                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   b50e510b8e759       kube-controller-manager-addons-447248
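
Note: every container in the table shows ATTEMPT 0, i.e. nothing restarted during the run, so the registry failure was not caused by a crash-looping component. The same view from the API side:

	kubectl --context addons-447248 -n kube-system get pods -o wide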
	
	
	==> coredns [33e133ff95ed] <==
	[INFO] 10.244.0.22:50587 - 23412 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000485432s
	[INFO] 10.244.0.22:50587 - 32350 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000236805s
	[INFO] 10.244.0.22:50587 - 32196 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000244612s
	[INFO] 10.244.0.22:50587 - 35430 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000370829s
	[INFO] 10.244.0.22:50587 - 26168 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00012288s
	[INFO] 10.244.0.22:47269 - 65078 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000036983s
	[INFO] 10.244.0.22:47269 - 60262 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000054745s
	[INFO] 10.244.0.22:47269 - 56093 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000052834s
	[INFO] 10.244.0.22:47269 - 61968 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044801s
	[INFO] 10.244.0.22:47269 - 9717 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000178111s
	[INFO] 10.244.0.22:47269 - 32879 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000115091s
	[INFO] 10.244.0.22:52378 - 65064 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000039125s
	[INFO] 10.244.0.22:52378 - 45062 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000137684s
	[INFO] 10.244.0.22:52378 - 55708 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036717s
	[INFO] 10.244.0.22:47487 - 4464 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000154689s
	[INFO] 10.244.0.22:47487 - 56420 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000040823s
	[INFO] 10.244.0.22:47487 - 18549 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000028278s
	[INFO] 10.244.0.22:47487 - 28241 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000109668s
	[INFO] 10.244.0.22:52378 - 58562 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045662s
	[INFO] 10.244.0.22:47487 - 8739 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000027697s
	[INFO] 10.244.0.22:47487 - 11947 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000095249s
	[INFO] 10.244.0.22:52378 - 43832 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000072742s
	[INFO] 10.244.0.22:52378 - 27547 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000032105s
	[INFO] 10.244.0.22:47487 - 34148 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000039095s
	[INFO] 10.244.0.22:52378 - 62524 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000066632s
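
Note: the NXDOMAIN-then-NOERROR pattern above is normal resolver behavior, not a DNS failure: "hello-world-app.default.svc.cluster.local" has four dots, below the ndots:5 default in pod resolv.conf, so each search domain is appended and tried (NXDOMAIN) before the name is queried as written (NOERROR). A quick in-cluster probe, mirroring the kubectl run style used elsewhere in this report:

	kubectl --context addons-447248 run dns-probe --image=busybox --restart=Never -it --rm -- nslookup hello-world-app.default.svc.cluster.local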
	
	
	==> describe nodes <==
	Name:               addons-447248
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-447248
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=addons-447248
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T17_30_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-447248
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:30:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-447248
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 17:42:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 17:42:47 +0000   Tue, 10 Sep 2024 17:30:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 17:42:47 +0000   Tue, 10 Sep 2024 17:30:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 17:42:47 +0000   Tue, 10 Sep 2024 17:30:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 17:42:47 +0000   Tue, 10 Sep 2024 17:30:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.59
	  Hostname:    addons-447248
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 027b056e9d36400abfdd6cf6a67d71a0
	  System UUID:                027b056e-9d36-400a-bfdd-6cf6a67d71a0
	  Boot ID:                    3515f27f-6095-4e57-aa33-dd122ddfa407
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     hello-world-app-55bf9c44b4-q54hf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  gcp-auth                    gcp-auth-89d5ffd79-fpjch                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  headlamp                    headlamp-57fb76fcdb-hdfcd                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-6f6b679f8f-954g7                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-447248                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-447248             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-447248    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-r6wh7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-447248             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-447248 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-447248 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-447248 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-447248 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-447248 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-447248 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-447248 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-447248 event: Registered Node addons-447248 in Controller
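
Note: all four node conditions are healthy, the node is untainted, and CPU requests sit at 750m of 2 cores, so scheduling pressure is not the issue here. This section can be regenerated at any time with:

	kubectl --context addons-447248 describe node addons-447248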
	
	
	==> dmesg <==
	[  +5.885876] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.082443] kauditd_printk_skb: 41 callbacks suppressed
	[ +10.831445] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.966663] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.008076] kauditd_printk_skb: 36 callbacks suppressed
	[Sep10 17:32] kauditd_printk_skb: 28 callbacks suppressed
	[Sep10 17:33] kauditd_printk_skb: 40 callbacks suppressed
	[  +9.595033] kauditd_printk_skb: 9 callbacks suppressed
	[ +11.856939] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.116094] kauditd_printk_skb: 2 callbacks suppressed
	[ +18.746238] kauditd_printk_skb: 20 callbacks suppressed
	[Sep10 17:34] kauditd_printk_skb: 2 callbacks suppressed
	[Sep10 17:37] kauditd_printk_skb: 28 callbacks suppressed
	[Sep10 17:41] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.793287] kauditd_printk_skb: 15 callbacks suppressed
	[Sep10 17:42] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.479992] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.930787] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.476959] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.245409] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.881706] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.165837] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.550337] kauditd_printk_skb: 19 callbacks suppressed
	[  +7.635964] kauditd_printk_skb: 83 callbacks suppressed
	[  +8.400178] kauditd_printk_skb: 33 callbacks suppressed
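
Note: the repeated "kauditd_printk_skb: N callbacks suppressed" lines are the kernel rate-limiting audit messages under heavy container churn; they do not indicate a kernel-level problem. The ring buffer can be re-read with (sudo may be needed depending on dmesg_restrict):

	out/minikube-linux-amd64 -p addons-447248 ssh -- sudo dmesg | tail -n 20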
	
	
	==> etcd [f0511e42ae92] <==
	{"level":"info","ts":"2024-09-10T17:31:20.353950Z","caller":"traceutil/trace.go:171","msg":"trace[1929265029] transaction","detail":"{read_only:false; response_revision:1143; number_of_response:1; }","duration":"123.798843ms","start":"2024-09-10T17:31:20.230125Z","end":"2024-09-10T17:31:20.353924Z","steps":["trace[1929265029] 'process raft request'  (duration: 39.983536ms)","trace[1929265029] 'compare'  (duration: 82.001215ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-10T17:31:20.354566Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.758066ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:31:20.354602Z","caller":"traceutil/trace.go:171","msg":"trace[287440950] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1143; }","duration":"107.806516ms","start":"2024-09-10T17:31:20.246786Z","end":"2024-09-10T17:31:20.354592Z","steps":["trace[287440950] 'agreement among raft nodes before linearized reading'  (duration: 107.728297ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:31:40.802640Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.038546ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:31:40.802703Z","caller":"traceutil/trace.go:171","msg":"trace[1854998565] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1250; }","duration":"119.129126ms","start":"2024-09-10T17:31:40.683563Z","end":"2024-09-10T17:31:40.802692Z","steps":["trace[1854998565] 'range keys from in-memory index tree'  (duration: 118.86183ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:31:46.812314Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"271.855763ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-10T17:31:46.812642Z","caller":"traceutil/trace.go:171","msg":"trace[1080120900] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1264; }","duration":"272.184881ms","start":"2024-09-10T17:31:46.540438Z","end":"2024-09-10T17:31:46.812622Z","steps":["trace[1080120900] 'range keys from in-memory index tree'  (duration: 271.754903ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:31:46.812717Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.641262ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:31:46.812778Z","caller":"traceutil/trace.go:171","msg":"trace[707923923] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1264; }","duration":"129.707675ms","start":"2024-09-10T17:31:46.683060Z","end":"2024-09-10T17:31:46.812768Z","steps":["trace[707923923] 'range keys from in-memory index tree'  (duration: 129.490904ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:31:46.813118Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"250.451319ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:31:46.813390Z","caller":"traceutil/trace.go:171","msg":"trace[434049286] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1264; }","duration":"250.726681ms","start":"2024-09-10T17:31:46.562651Z","end":"2024-09-10T17:31:46.813378Z","steps":["trace[434049286] 'range keys from in-memory index tree'  (duration: 250.407303ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:33:29.716412Z","caller":"traceutil/trace.go:171","msg":"trace[1113105238] transaction","detail":"{read_only:false; response_revision:1596; number_of_response:1; }","duration":"283.628179ms","start":"2024-09-10T17:33:29.432732Z","end":"2024-09-10T17:33:29.716360Z","steps":["trace[1113105238] 'process raft request'  (duration: 283.300922ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:33:30.357225Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"268.34833ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-09-10T17:33:30.357325Z","caller":"traceutil/trace.go:171","msg":"trace[943533811] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1597; }","duration":"268.555687ms","start":"2024-09-10T17:33:30.088759Z","end":"2024-09-10T17:33:30.357315Z","steps":["trace[943533811] 'range keys from in-memory index tree'  (duration: 268.209441ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:33:30.357905Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.382588ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:33:30.358236Z","caller":"traceutil/trace.go:171","msg":"trace[1551314789] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1597; }","duration":"220.485294ms","start":"2024-09-10T17:33:30.137737Z","end":"2024-09-10T17:33:30.358223Z","steps":["trace[1551314789] 'range keys from in-memory index tree'  (duration: 219.372926ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:40:09.335629Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1903}
	{"level":"info","ts":"2024-09-10T17:40:09.426480Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1903,"took":"89.318704ms","hash":4277115753,"current-db-size-bytes":8953856,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":5017600,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-10T17:40:09.426773Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4277115753,"revision":1903,"compact-revision":-1}
	{"level":"info","ts":"2024-09-10T17:42:59.358593Z","caller":"traceutil/trace.go:171","msg":"trace[833968809] linearizableReadLoop","detail":"{readStateIndex:3332; appliedIndex:3331; }","duration":"112.871154ms","start":"2024-09-10T17:42:59.245682Z","end":"2024-09-10T17:42:59.358554Z","steps":["trace[833968809] 'read index received'  (duration: 112.709981ms)","trace[833968809] 'applied index is now lower than readState.Index'  (duration: 160.751µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-10T17:42:59.358801Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.073324ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:42:59.358791Z","caller":"traceutil/trace.go:171","msg":"trace[373480421] transaction","detail":"{read_only:false; response_revision:3131; number_of_response:1; }","duration":"142.996949ms","start":"2024-09-10T17:42:59.215775Z","end":"2024-09-10T17:42:59.358772Z","steps":["trace[373480421] 'process raft request'  (duration: 142.634418ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-10T17:42:59.358831Z","caller":"traceutil/trace.go:171","msg":"trace[1729930887] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:3131; }","duration":"113.159854ms","start":"2024-09-10T17:42:59.245663Z","end":"2024-09-10T17:42:59.358823Z","steps":["trace[1729930887] 'agreement among raft nodes before linearized reading'  (duration: 113.054606ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-10T17:42:59.904601Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.282311ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-10T17:42:59.904964Z","caller":"traceutil/trace.go:171","msg":"trace[13724041] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:3132; }","duration":"192.640265ms","start":"2024-09-10T17:42:59.712277Z","end":"2024-09-10T17:42:59.904917Z","steps":["trace[13724041] 'range keys from in-memory index tree'  (duration: 192.267538ms)"],"step_count":1}
	
	
	==> gcp-auth [c29f65fee3d5] <==
	2024/09/10 17:33:48 Ready to write response ...
	2024/09/10 17:41:50 Ready to marshal response ...
	2024/09/10 17:41:50 Ready to write response ...
	2024/09/10 17:41:50 Ready to marshal response ...
	2024/09/10 17:41:50 Ready to write response ...
	2024/09/10 17:42:01 Ready to marshal response ...
	2024/09/10 17:42:01 Ready to write response ...
	2024/09/10 17:42:02 Ready to marshal response ...
	2024/09/10 17:42:02 Ready to write response ...
	2024/09/10 17:42:07 Ready to marshal response ...
	2024/09/10 17:42:07 Ready to write response ...
	2024/09/10 17:42:20 Ready to marshal response ...
	2024/09/10 17:42:20 Ready to write response ...
	2024/09/10 17:42:26 Ready to marshal response ...
	2024/09/10 17:42:26 Ready to write response ...
	2024/09/10 17:42:35 Ready to marshal response ...
	2024/09/10 17:42:35 Ready to write response ...
	2024/09/10 17:42:38 Ready to marshal response ...
	2024/09/10 17:42:38 Ready to write response ...
	2024/09/10 17:42:54 Ready to marshal response ...
	2024/09/10 17:42:54 Ready to write response ...
	2024/09/10 17:42:54 Ready to marshal response ...
	2024/09/10 17:42:54 Ready to write response ...
	2024/09/10 17:42:54 Ready to marshal response ...
	2024/09/10 17:42:54 Ready to write response ...
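
Note: each marshal/write pair is the gcp-auth webhook handling a pod admission request; the timestamps track the pod creations seen elsewhere in this log (nginx, hello-world-app, headlamp). The webhook registration itself can be listed with:

	kubectl --context addons-447248 get mutatingwebhookconfigurations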
	
	
	==> kernel <==
	 17:43:03 up 13 min,  0 users,  load average: 2.39, 1.03, 0.64
	Linux addons-447248 5.10.207 #1 SMP Tue Sep 10 01:47:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
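
Note: a 1-minute load average of 2.39 on a 2-vCPU guest means the VM was CPU-saturated while the post-mortem ran, consistent with the slow etcd applies above. Current values:

	out/minikube-linux-amd64 -p addons-447248 ssh -- cat /proc/loadavg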
	
	
	==> kube-apiserver [e5e603932e5d] <==
	I0910 17:42:16.046084       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0910 17:42:18.549649       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0910 17:42:23.819684       1 conn.go:339] Error on socket receive: read tcp 192.168.39.59:8443->192.168.39.1:47530: use of closed network connection
	I0910 17:42:26.728075       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0910 17:42:26.937906       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.67.44"}
	I0910 17:42:38.407306       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.74.159"}
	E0910 17:42:40.975664       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0910 17:42:41.572831       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0910 17:42:41.578788       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	I0910 17:42:51.541726       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0910 17:42:52.573903       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0910 17:42:52.911356       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:42:52.917000       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:42:52.960259       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:42:52.960579       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:42:52.980599       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:42:52.980653       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:42:53.012922       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:42:53.012968       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0910 17:42:53.081538       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0910 17:42:53.081586       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0910 17:42:54.013513       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0910 17:42:54.082015       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0910 17:42:54.106188       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0910 17:42:54.285119       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.70.193"}
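
Note: the "allocated clusterIPs" lines confirm that the Services created by the parallel tests (nginx, hello-world-app, headlamp) received addresses; they can be cross-checked with:

	kubectl --context addons-447248 get svc -A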
	
	
	==> kube-controller-manager [070499a6c70b] <==
	W0910 17:42:54.927090       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:42:54.927128       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:42:55.245137       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:42:55.245233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:42:56.672550       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:42:56.673296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:42:57.018840       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:42:57.018884       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:42:57.166515       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:42:57.166603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:42:58.058516       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:42:58.058564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0910 17:43:01.078562       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="49.555µs"
	I0910 17:43:01.109758       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="12.81142ms"
	I0910 17:43:01.110135       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="47.93µs"
	W0910 17:43:01.415421       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:43:01.415709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0910 17:43:01.805654       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0910 17:43:01.867787       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:43:01.867829       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0910 17:43:02.263236       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="3.706µs"
	W0910 17:43:02.492653       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:43:02.492707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:43:02.621142       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:43:02.621253       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
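
Note: the repeated PartialObjectMetadata list failures come from metadata informers still watching the snapshot.storage.k8s.io and gadget.kinvolk.io CRDs that the apiserver log shows being removed at 17:42:52-54; they stop once the informers resync. Confirm which CRDs remain with:

	kubectl --context addons-447248 get crd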
	
	
	==> kube-proxy [cc8465d270e6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0910 17:30:19.640705       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0910 17:30:19.657356       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.59"]
	E0910 17:30:19.657441       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 17:30:19.730973       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0910 17:30:19.731020       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 17:30:19.731047       1 server_linux.go:169] "Using iptables Proxier"
	I0910 17:30:19.743496       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 17:30:19.743855       1 server.go:483] "Version info" version="v1.31.0"
	I0910 17:30:19.743868       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 17:30:19.746106       1 config.go:197] "Starting service config controller"
	I0910 17:30:19.746132       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 17:30:19.748325       1 config.go:104] "Starting endpoint slice config controller"
	I0910 17:30:19.748337       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 17:30:19.748816       1 config.go:326] "Starting node config controller"
	I0910 17:30:19.748824       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 17:30:19.849025       1 shared_informer.go:320] Caches are synced for node config
	I0910 17:30:19.849049       1 shared_informer.go:320] Caches are synced for service config
	I0910 17:30:19.849068       1 shared_informer.go:320] Caches are synced for endpoint slice config
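
Note: the truncated nftables errors at the top of this section are benign: kube-proxy's attempt to clean up leftover nftables rules fails with "Operation not supported" on this guest kernel, and it proceeds with the iptables proxier in single-stack IPv4 mode, as logged. The resulting service rules can be inspected with:

	out/minikube-linux-amd64 -p addons-447248 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head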
	
	
	==> kube-scheduler [f3b6b0755f20] <==
	W0910 17:30:10.634643       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0910 17:30:10.636475       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:10.634678       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 17:30:10.636662       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:11.474924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0910 17:30:11.475030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:11.616368       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0910 17:30:11.616419       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:11.704781       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0910 17:30:11.705331       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:11.712822       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0910 17:30:11.713556       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:11.725301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0910 17:30:11.725543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:11.826019       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0910 17:30:11.826293       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:11.842797       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 17:30:11.843237       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:11.890922       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 17:30:11.892295       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:11.909051       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0910 17:30:11.909144       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0910 17:30:11.953427       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 17:30:11.953659       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0910 17:30:14.125256       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
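
Note: the burst of "forbidden" list/watch errors is a startup race: the scheduler comes up before its RBAC grants are visible, and the errors end once caches sync (final line above). The permissions can be spot-checked via impersonation:

	kubectl --context addons-447248 auth can-i list pods --as=system:kube-scheduler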
	
	
	==> kubelet <==
	Sep 10 17:42:55 addons-447248 kubelet[1966]: I0910 17:42:55.142839    1966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9567f0ed-7d9f-453b-8a5c-38b6399e533e" path="/var/lib/kubelet/pods/9567f0ed-7d9f-453b-8a5c-38b6399e533e/volumes"
	Sep 10 17:42:57 addons-447248 kubelet[1966]: E0910 17:42:57.139895    1966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="74960b24-400e-4e9a-8577-2e9e4cb97f4a"
	Sep 10 17:43:01 addons-447248 kubelet[1966]: I0910 17:43:01.080055    1966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="headlamp/headlamp-57fb76fcdb-hdfcd" podStartSLOduration=3.312959518 podStartE2EDuration="7.080033022s" podCreationTimestamp="2024-09-10 17:42:54 +0000 UTC" firstStartedPulling="2024-09-10 17:42:55.816262733 +0000 UTC m=+762.835690610" lastFinishedPulling="2024-09-10 17:42:59.583336234 +0000 UTC m=+766.602764114" observedRunningTime="2024-09-10 17:43:01.079520336 +0000 UTC m=+768.098948213" watchObservedRunningTime="2024-09-10 17:43:01.080033022 +0000 UTC m=+768.099460918"
	Sep 10 17:43:02 addons-447248 kubelet[1966]: I0910 17:43:02.066556    1966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6859bb53-6de9-4a60-b766-4a4c42e9d25d-gcp-creds\") pod \"6859bb53-6de9-4a60-b766-4a4c42e9d25d\" (UID: \"6859bb53-6de9-4a60-b766-4a4c42e9d25d\") "
	Sep 10 17:43:02 addons-447248 kubelet[1966]: I0910 17:43:02.066627    1966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dlgk\" (UniqueName: \"kubernetes.io/projected/6859bb53-6de9-4a60-b766-4a4c42e9d25d-kube-api-access-2dlgk\") pod \"6859bb53-6de9-4a60-b766-4a4c42e9d25d\" (UID: \"6859bb53-6de9-4a60-b766-4a4c42e9d25d\") "
	Sep 10 17:43:02 addons-447248 kubelet[1966]: I0910 17:43:02.067053    1966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6859bb53-6de9-4a60-b766-4a4c42e9d25d-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "6859bb53-6de9-4a60-b766-4a4c42e9d25d" (UID: "6859bb53-6de9-4a60-b766-4a4c42e9d25d"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 10 17:43:02 addons-447248 kubelet[1966]: I0910 17:43:02.069027    1966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6859bb53-6de9-4a60-b766-4a4c42e9d25d-kube-api-access-2dlgk" (OuterVolumeSpecName: "kube-api-access-2dlgk") pod "6859bb53-6de9-4a60-b766-4a4c42e9d25d" (UID: "6859bb53-6de9-4a60-b766-4a4c42e9d25d"). InnerVolumeSpecName "kube-api-access-2dlgk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 10 17:43:02 addons-447248 kubelet[1966]: I0910 17:43:02.166944    1966 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6859bb53-6de9-4a60-b766-4a4c42e9d25d-gcp-creds\") on node \"addons-447248\" DevicePath \"\""
	Sep 10 17:43:02 addons-447248 kubelet[1966]: I0910 17:43:02.166987    1966 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2dlgk\" (UniqueName: \"kubernetes.io/projected/6859bb53-6de9-4a60-b766-4a4c42e9d25d-kube-api-access-2dlgk\") on node \"addons-447248\" DevicePath \"\""
	Sep 10 17:43:02 addons-447248 kubelet[1966]: I0910 17:43:02.670395    1966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwp4m\" (UniqueName: \"kubernetes.io/projected/85b87341-00c1-4bec-876a-9eabfeb2cb35-kube-api-access-qwp4m\") pod \"85b87341-00c1-4bec-876a-9eabfeb2cb35\" (UID: \"85b87341-00c1-4bec-876a-9eabfeb2cb35\") "
	Sep 10 17:43:02 addons-447248 kubelet[1966]: I0910 17:43:02.673864    1966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85b87341-00c1-4bec-876a-9eabfeb2cb35-kube-api-access-qwp4m" (OuterVolumeSpecName: "kube-api-access-qwp4m") pod "85b87341-00c1-4bec-876a-9eabfeb2cb35" (UID: "85b87341-00c1-4bec-876a-9eabfeb2cb35"). InnerVolumeSpecName "kube-api-access-qwp4m". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 10 17:43:02 addons-447248 kubelet[1966]: I0910 17:43:02.771922    1966 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hwx6\" (UniqueName: \"kubernetes.io/projected/8a998f90-a892-4121-b82b-dbe047da7b63-kube-api-access-6hwx6\") pod \"8a998f90-a892-4121-b82b-dbe047da7b63\" (UID: \"8a998f90-a892-4121-b82b-dbe047da7b63\") "
	Sep 10 17:43:02 addons-447248 kubelet[1966]: I0910 17:43:02.772141    1966 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qwp4m\" (UniqueName: \"kubernetes.io/projected/85b87341-00c1-4bec-876a-9eabfeb2cb35-kube-api-access-qwp4m\") on node \"addons-447248\" DevicePath \"\""
	Sep 10 17:43:02 addons-447248 kubelet[1966]: I0910 17:43:02.774129    1966 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a998f90-a892-4121-b82b-dbe047da7b63-kube-api-access-6hwx6" (OuterVolumeSpecName: "kube-api-access-6hwx6") pod "8a998f90-a892-4121-b82b-dbe047da7b63" (UID: "8a998f90-a892-4121-b82b-dbe047da7b63"). InnerVolumeSpecName "kube-api-access-6hwx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 10 17:43:02 addons-447248 kubelet[1966]: I0910 17:43:02.872883    1966 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6hwx6\" (UniqueName: \"kubernetes.io/projected/8a998f90-a892-4121-b82b-dbe047da7b63-kube-api-access-6hwx6\") on node \"addons-447248\" DevicePath \"\""
	Sep 10 17:43:03 addons-447248 kubelet[1966]: I0910 17:43:03.103269    1966 scope.go:117] "RemoveContainer" containerID="c4c8b4e7237dc25625807a0fe46b4188de565ff3f5c087fa2378da5b3ca9f3d2"
	Sep 10 17:43:03 addons-447248 kubelet[1966]: I0910 17:43:03.162090    1966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6859bb53-6de9-4a60-b766-4a4c42e9d25d" path="/var/lib/kubelet/pods/6859bb53-6de9-4a60-b766-4a4c42e9d25d/volumes"
	Sep 10 17:43:03 addons-447248 kubelet[1966]: I0910 17:43:03.162398    1966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85b87341-00c1-4bec-876a-9eabfeb2cb35" path="/var/lib/kubelet/pods/85b87341-00c1-4bec-876a-9eabfeb2cb35/volumes"
	Sep 10 17:43:03 addons-447248 kubelet[1966]: I0910 17:43:03.168563    1966 scope.go:117] "RemoveContainer" containerID="c4c8b4e7237dc25625807a0fe46b4188de565ff3f5c087fa2378da5b3ca9f3d2"
	Sep 10 17:43:03 addons-447248 kubelet[1966]: E0910 17:43:03.169889    1966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: c4c8b4e7237dc25625807a0fe46b4188de565ff3f5c087fa2378da5b3ca9f3d2" containerID="c4c8b4e7237dc25625807a0fe46b4188de565ff3f5c087fa2378da5b3ca9f3d2"
	Sep 10 17:43:03 addons-447248 kubelet[1966]: I0910 17:43:03.169920    1966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c4c8b4e7237dc25625807a0fe46b4188de565ff3f5c087fa2378da5b3ca9f3d2"} err="failed to get container status \"c4c8b4e7237dc25625807a0fe46b4188de565ff3f5c087fa2378da5b3ca9f3d2\": rpc error: code = Unknown desc = Error response from daemon: No such container: c4c8b4e7237dc25625807a0fe46b4188de565ff3f5c087fa2378da5b3ca9f3d2"
	Sep 10 17:43:03 addons-447248 kubelet[1966]: I0910 17:43:03.169941    1966 scope.go:117] "RemoveContainer" containerID="da939059963da96730fd283573b6f31501f86782f4cce02173feceeae8da48f7"
	Sep 10 17:43:03 addons-447248 kubelet[1966]: I0910 17:43:03.188433    1966 scope.go:117] "RemoveContainer" containerID="da939059963da96730fd283573b6f31501f86782f4cce02173feceeae8da48f7"
	Sep 10 17:43:03 addons-447248 kubelet[1966]: E0910 17:43:03.190048    1966 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: da939059963da96730fd283573b6f31501f86782f4cce02173feceeae8da48f7" containerID="da939059963da96730fd283573b6f31501f86782f4cce02173feceeae8da48f7"
	Sep 10 17:43:03 addons-447248 kubelet[1966]: I0910 17:43:03.190088    1966 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"da939059963da96730fd283573b6f31501f86782f4cce02173feceeae8da48f7"} err="failed to get container status \"da939059963da96730fd283573b6f31501f86782f4cce02173feceeae8da48f7\": rpc error: code = Unknown desc = Error response from daemon: No such container: da939059963da96730fd283573b6f31501f86782f4cce02173feceeae8da48f7"
	
	
	==> storage-provisioner [9b5fc0da663e] <==
	I0910 17:30:26.803345       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 17:30:26.855510       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 17:30:26.855659       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 17:30:27.055340       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 17:30:27.055533       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-447248_78c776a9-9cc3-468c-ada0-0e8f828f2bf0!
	I0910 17:30:27.056501       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7ee29b0a-75b2-49d7-853a-27e694354392", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-447248_78c776a9-9cc3-468c-ada0-0e8f828f2bf0 became leader
	I0910 17:30:27.360354       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-447248_78c776a9-9cc3-468c-ada0-0e8f828f2bf0!
	

-- /stdout --
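
The storage-provisioner log above shows the standard controller startup pattern: campaign for a leader lease in kube-system, and only start the provisioning loop once the lease is held. The event it records locks an Endpoints object; current client-go code usually takes a coordination.k8s.io Lease instead. Below is a minimal sketch of that pattern, reusing the lease name and namespace from the log; the identity source and timing values are illustrative assumptions, not minikube's actual code.

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Lease name/namespace as in the log; the identity is an assumption
		// (any per-replica unique string works, e.g. the pod name).
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("POD_NAME")},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:            lock,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			ReleaseOnCancel: true,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease; starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost lease; shutting down")
				},
			},
		})
	}

RunOrDie blocks for the life of the process: OnStartedLeading fires once the lease is acquired, as in the "successfully acquired lease" line above, and a real controller should stop its work promptly when OnStoppedLeading fires.
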
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-447248 -n addons-447248
helpers_test.go:261: (dbg) Run:  kubectl --context addons-447248 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-447248 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-447248 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-447248/192.168.39.59
	Start Time:       Tue, 10 Sep 2024 17:33:48 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wddsr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wddsr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m16s                   default-scheduler  Successfully assigned default/busybox to addons-447248
	  Normal   Pulling    7m50s (x4 over 9m15s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m50s (x4 over 9m15s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m50s (x4 over 9m15s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m37s (x6 over 9m14s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m10s (x21 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (73.47s)
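
For anyone reproducing the registry failure above: the in-cluster wget that timed out amounts to an HTTP GET against the registry expecting a 200. A standalone version of that probe in Go is sketched below, under two assumptions flagged here: the node IP is taken from this run (192.168.39.59) and the registry addon is assumed to answer on its conventional port 5000; both need substituting for another run.

	package main

	import (
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		// /v2/ is the Docker registry API root; a healthy registry answers 200.
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Get("http://192.168.39.59:5000/v2/")
		if err != nil {
			log.Fatalf("registry unreachable: %v", err)
		}
		defer resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}

If this direct probe succeeds while the in-cluster service lookup still times out, cluster DNS or kube-proxy routing is the more likely culprit than the registry container itself.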

Test pass (309/341)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 13.57
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 4.13
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.14
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
22 TestOffline 69.58
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 221.01
29 TestAddons/serial/Volcano 41.93
31 TestAddons/serial/GCPAuth/Namespaces 0.13
34 TestAddons/parallel/Ingress 21.93
35 TestAddons/parallel/InspektorGadget 10.84
36 TestAddons/parallel/MetricsServer 6.71
37 TestAddons/parallel/HelmTiller 11.85
39 TestAddons/parallel/CSI 62.7
40 TestAddons/parallel/Headlamp 14.21
41 TestAddons/parallel/CloudSpanner 5.54
42 TestAddons/parallel/LocalPath 55.5
43 TestAddons/parallel/NvidiaDevicePlugin 6.4
44 TestAddons/parallel/Yakd 10.86
45 TestAddons/StoppedEnableDisable 8.54
46 TestCertOptions 75.04
47 TestCertExpiration 320.32
48 TestDockerFlags 54.48
49 TestForceSystemdFlag 76.69
50 TestForceSystemdEnv 85.9
52 TestKVMDriverInstallOrUpdate 5.17
56 TestErrorSpam/setup 48.6
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.7
59 TestErrorSpam/pause 1.19
60 TestErrorSpam/unpause 1.41
61 TestErrorSpam/stop 5.98
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 88.42
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 41.46
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.39
73 TestFunctional/serial/CacheCmd/cache/add_local 1.26
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.2
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 42.33
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 0.98
84 TestFunctional/serial/LogsFileCmd 0.99
85 TestFunctional/serial/InvalidService 4.45
87 TestFunctional/parallel/ConfigCmd 0.29
88 TestFunctional/parallel/DashboardCmd 13.76
89 TestFunctional/parallel/DryRun 0.27
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 0.75
95 TestFunctional/parallel/ServiceCmdConnect 25.49
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 47.28
99 TestFunctional/parallel/SSHCmd 0.4
100 TestFunctional/parallel/CpCmd 1.29
101 TestFunctional/parallel/MySQL 31.96
102 TestFunctional/parallel/FileSync 0.21
103 TestFunctional/parallel/CertSync 1.31
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.23
111 TestFunctional/parallel/License 0.19
112 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
113 TestFunctional/parallel/Version/short 0.06
114 TestFunctional/parallel/Version/components 0.56
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.43
119 TestFunctional/parallel/ImageCommands/Setup 1.59
129 TestFunctional/parallel/DockerEnv/bash 0.81
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.05
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
135 TestFunctional/parallel/ProfileCmd/profile_list 0.29
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.83
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.49
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.44
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.68
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.42
143 TestFunctional/parallel/ServiceCmd/DeployApp 21.24
144 TestFunctional/parallel/ServiceCmd/List 0.44
145 TestFunctional/parallel/MountCmd/any-port 7.34
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
148 TestFunctional/parallel/ServiceCmd/Format 0.32
149 TestFunctional/parallel/ServiceCmd/URL 0.29
150 TestFunctional/parallel/MountCmd/specific-port 1.55
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.52
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.01
155 TestGvisorAddon 193.3
158 TestMultiControlPlane/serial/StartCluster 212.22
159 TestMultiControlPlane/serial/DeployApp 5.31
160 TestMultiControlPlane/serial/PingHostFromPods 1.21
161 TestMultiControlPlane/serial/AddWorkerNode 62.31
162 TestMultiControlPlane/serial/NodeLabels 0.07
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
164 TestMultiControlPlane/serial/CopyFile 12.24
165 TestMultiControlPlane/serial/StopSecondaryNode 13.19
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.38
167 TestMultiControlPlane/serial/RestartSecondaryNode 42.82
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.54
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 255.8
170 TestMultiControlPlane/serial/DeleteSecondaryNode 7.08
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
172 TestMultiControlPlane/serial/StopCluster 38.27
173 TestMultiControlPlane/serial/RestartCluster 167.23
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.35
175 TestMultiControlPlane/serial/AddSecondaryNode 83.49
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
179 TestImageBuild/serial/Setup 45.47
180 TestImageBuild/serial/NormalBuild 2.13
181 TestImageBuild/serial/BuildWithBuildArg 1.14
182 TestImageBuild/serial/BuildWithDockerIgnore 1.11
183 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.85
187 TestJSONOutput/start/Command 60.4
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.56
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.5
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.46
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.2
215 TestMainNoArgs 0.04
216 TestMinikubeProfile 100.07
219 TestMountStart/serial/StartWithMountFirst 27.74
220 TestMountStart/serial/VerifyMountFirst 0.35
221 TestMountStart/serial/StartWithMountSecond 30.86
222 TestMountStart/serial/VerifyMountSecond 0.37
223 TestMountStart/serial/DeleteFirst 0.91
224 TestMountStart/serial/VerifyMountPostDelete 0.37
225 TestMountStart/serial/Stop 2.36
226 TestMountStart/serial/RestartStopped 27.41
227 TestMountStart/serial/VerifyMountPostStop 0.37
230 TestMultiNode/serial/FreshStart2Nodes 126.79
231 TestMultiNode/serial/DeployApp2Nodes 4.45
232 TestMultiNode/serial/PingHostFrom2Pods 0.83
233 TestMultiNode/serial/AddNode 59.45
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.2
236 TestMultiNode/serial/CopyFile 7.05
237 TestMultiNode/serial/StopNode 3.31
238 TestMultiNode/serial/StartAfterStop 42.14
239 TestMultiNode/serial/RestartKeepsNodes 175.03
240 TestMultiNode/serial/DeleteNode 2.01
241 TestMultiNode/serial/StopMultiNode 24.98
242 TestMultiNode/serial/RestartMultiNode 228.34
243 TestMultiNode/serial/ValidateNameConflict 48.53
248 TestPreload 147.62
250 TestScheduledStopUnix 118.15
251 TestSkaffold 127.63
254 TestRunningBinaryUpgrade 172.47
256 TestKubernetesUpgrade 228.23
276 TestStoppedBinaryUpgrade/Setup 0.36
277 TestStoppedBinaryUpgrade/Upgrade 216.67
278 TestStoppedBinaryUpgrade/MinikubeLogs 1.3
280 TestPause/serial/Start 89.2
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
283 TestNoKubernetes/serial/StartWithK8s 62.16
284 TestPause/serial/SecondStartNoReconfiguration 44.71
285 TestNetworkPlugins/group/auto/Start 78.41
286 TestNoKubernetes/serial/StartWithStopK8s 27.61
287 TestPause/serial/Pause 0.65
288 TestPause/serial/VerifyStatus 0.31
289 TestPause/serial/Unpause 0.64
290 TestPause/serial/PauseAgain 0.75
291 TestPause/serial/DeletePaused 1.89
292 TestNetworkPlugins/group/kindnet/Start 80.63
293 TestPause/serial/VerifyDeletedResources 0.37
294 TestNetworkPlugins/group/calico/Start 122
295 TestNoKubernetes/serial/Start 82.12
296 TestNetworkPlugins/group/auto/KubeletFlags 0.4
297 TestNetworkPlugins/group/auto/NetCatPod 13.27
298 TestNetworkPlugins/group/auto/DNS 0.17
299 TestNetworkPlugins/group/auto/Localhost 0.15
300 TestNetworkPlugins/group/auto/HairPin 0.15
301 TestNetworkPlugins/group/custom-flannel/Start 93.78
302 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
304 TestNetworkPlugins/group/kindnet/NetCatPod 12.25
305 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
306 TestNoKubernetes/serial/ProfileList 1.22
307 TestNoKubernetes/serial/Stop 2.32
308 TestNoKubernetes/serial/StartNoArgs 43.27
309 TestNetworkPlugins/group/kindnet/DNS 0.23
310 TestNetworkPlugins/group/kindnet/Localhost 0.14
311 TestNetworkPlugins/group/kindnet/HairPin 0.18
312 TestNetworkPlugins/group/false/Start 115.47
313 TestNetworkPlugins/group/calico/ControllerPod 6.01
314 TestNetworkPlugins/group/calico/KubeletFlags 0.2
315 TestNetworkPlugins/group/calico/NetCatPod 12.24
316 TestNetworkPlugins/group/calico/DNS 0.18
317 TestNetworkPlugins/group/calico/Localhost 0.17
318 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
319 TestNetworkPlugins/group/calico/HairPin 0.18
320 TestNetworkPlugins/group/enable-default-cni/Start 84.14
321 TestNetworkPlugins/group/flannel/Start 107.53
322 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
323 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.26
324 TestNetworkPlugins/group/custom-flannel/DNS 0.21
325 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
326 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
327 TestNetworkPlugins/group/kubenet/Start 96.27
328 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
329 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.05
330 TestNetworkPlugins/group/false/KubeletFlags 0.26
331 TestNetworkPlugins/group/false/NetCatPod 10.41
332 TestNetworkPlugins/group/false/DNS 0.22
333 TestNetworkPlugins/group/false/Localhost 0.17
334 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
335 TestNetworkPlugins/group/false/HairPin 0.16
336 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
337 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
338 TestNetworkPlugins/group/bridge/Start 96.82
340 TestStartStop/group/old-k8s-version/serial/FirstStart 202.75
341 TestNetworkPlugins/group/flannel/ControllerPod 6.01
342 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
343 TestNetworkPlugins/group/flannel/NetCatPod 10.23
344 TestNetworkPlugins/group/flannel/DNS 0.22
345 TestNetworkPlugins/group/flannel/Localhost 0.14
346 TestNetworkPlugins/group/flannel/HairPin 0.15
347 TestNetworkPlugins/group/kubenet/KubeletFlags 0.21
348 TestNetworkPlugins/group/kubenet/NetCatPod 10.26
349 TestNetworkPlugins/group/kubenet/DNS 0.23
350 TestNetworkPlugins/group/kubenet/Localhost 0.21
351 TestNetworkPlugins/group/kubenet/HairPin 0.19
353 TestStartStop/group/no-preload/serial/FirstStart 127.62
355 TestStartStop/group/embed-certs/serial/FirstStart 136.7
356 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
357 TestNetworkPlugins/group/bridge/NetCatPod 11.42
358 TestNetworkPlugins/group/bridge/DNS 0.2
359 TestNetworkPlugins/group/bridge/Localhost 0.15
360 TestNetworkPlugins/group/bridge/HairPin 0.16
362 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 70.9
363 TestStartStop/group/no-preload/serial/DeployApp 8.32
364 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
365 TestStartStop/group/no-preload/serial/Stop 13.35
366 TestStartStop/group/embed-certs/serial/DeployApp 9.34
367 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
368 TestStartStop/group/no-preload/serial/SecondStart 299.48
369 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
370 TestStartStop/group/embed-certs/serial/Stop 13.33
371 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.34
372 TestStartStop/group/old-k8s-version/serial/DeployApp 9.48
373 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
374 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.34
375 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
376 TestStartStop/group/embed-certs/serial/SecondStart 301.51
377 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.96
378 TestStartStop/group/old-k8s-version/serial/Stop 13.38
379 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
380 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 329.63
381 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
382 TestStartStop/group/old-k8s-version/serial/SecondStart 429.95
383 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
384 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
386 TestStartStop/group/no-preload/serial/Pause 2.64
388 TestStartStop/group/newest-cni/serial/FirstStart 63.72
389 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.01
390 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
391 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
392 TestStartStop/group/embed-certs/serial/Pause 2.77
393 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 7.01
394 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
395 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
396 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.64
397 TestStartStop/group/newest-cni/serial/DeployApp 0
398 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.91
399 TestStartStop/group/newest-cni/serial/Stop 13.34
400 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
401 TestStartStop/group/newest-cni/serial/SecondStart 37.54
402 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
403 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
404 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
405 TestStartStop/group/newest-cni/serial/Pause 2.24
406 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
407 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
408 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.19
409 TestStartStop/group/old-k8s-version/serial/Pause 2.27
TestDownloadOnly/v1.20.0/json-events (13.57s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-794328 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-794328 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (13.570191431s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (13.57s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-794328
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-794328: exit status 85 (57.805502ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-794328 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |          |
	|         | -p download-only-794328        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:29:06
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:29:06.063089   13158 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:29:06.063185   13158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:06.063190   13158 out.go:358] Setting ErrFile to fd 2...
	I0910 17:29:06.063194   13158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:06.063348   13158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5970/.minikube/bin
	W0910 17:29:06.063456   13158 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19598-5970/.minikube/config/config.json: open /home/jenkins/minikube-integration/19598-5970/.minikube/config/config.json: no such file or directory
	I0910 17:29:06.064030   13158 out.go:352] Setting JSON to true
	I0910 17:29:06.064959   13158 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":695,"bootTime":1725988651,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:29:06.065019   13158 start.go:139] virtualization: kvm guest
	I0910 17:29:06.067575   13158 out.go:97] [download-only-794328] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0910 17:29:06.067696   13158 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19598-5970/.minikube/cache/preloaded-tarball: no such file or directory
	I0910 17:29:06.067706   13158 notify.go:220] Checking for updates...
	I0910 17:29:06.069499   13158 out.go:169] MINIKUBE_LOCATION=19598
	I0910 17:29:06.071293   13158 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:29:06.073015   13158 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19598-5970/kubeconfig
	I0910 17:29:06.074780   13158 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5970/.minikube
	I0910 17:29:06.076301   13158 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0910 17:29:06.079178   13158 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0910 17:29:06.079455   13158 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:29:06.186125   13158 out.go:97] Using the kvm2 driver based on user configuration
	I0910 17:29:06.186150   13158 start.go:297] selected driver: kvm2
	I0910 17:29:06.186156   13158 start.go:901] validating driver "kvm2" against <nil>
	I0910 17:29:06.186482   13158 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:29:06.186631   13158 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19598-5970/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0910 17:29:06.201645   13158 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0910 17:29:06.201704   13158 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 17:29:06.202187   13158 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0910 17:29:06.202340   13158 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 17:29:06.202404   13158 cni.go:84] Creating CNI manager for ""
	I0910 17:29:06.202420   13158 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0910 17:29:06.202490   13158 start.go:340] cluster config:
	{Name:download-only-794328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-794328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:29:06.202704   13158 iso.go:125] acquiring lock: {Name:mk102d590109224a2b8dd000e4c8f825ff8b3e36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 17:29:06.204744   13158 out.go:97] Downloading VM boot image ...
	I0910 17:29:06.204786   13158 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19598-5970/.minikube/cache/iso/amd64/minikube-v1.34.0-1725912912-19598-amd64.iso
	I0910 17:29:15.074475   13158 out.go:97] Starting "download-only-794328" primary control-plane node in "download-only-794328" cluster
	I0910 17:29:15.074503   13158 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0910 17:29:15.105032   13158 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0910 17:29:15.105072   13158 cache.go:56] Caching tarball of preloaded images
	I0910 17:29:15.105230   13158 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0910 17:29:15.107196   13158 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0910 17:29:15.107212   13158 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0910 17:29:15.138089   13158 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19598-5970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-794328 host does not exist
	  To start a cluster, run: "minikube start -p download-only-794328"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
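
A note on the download lines in the log above: both the boot ISO and the preload tarball are fetched with a checksum query parameter (md5 for the preload), and the downloader verifies the digest after fetching. The sketch below illustrates the same verify-while-downloading idea in plain Go, reusing the URL and md5 digest that appear in the log; it is an illustration of the pattern, not minikube's download.go.

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"net/http"
		"os"
	)

	func main() {
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4"
		want := "9a82241e9b8b4ad2b5cca73108f2c7a3" // md5 from the log's checksum parameter

		resp, err := http.Get(url)
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()

		out, err := os.Create("preload.tar.lz4")
		if err != nil {
			log.Fatal(err)
		}
		defer out.Close()

		// Write to disk and hash in a single pass over the stream.
		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			log.Fatal(err)
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			log.Fatalf("checksum mismatch: got %s, want %s", got, want)
		}
		fmt.Println("preload verified")
	}

Hashing via io.MultiWriter while streaming avoids a second pass over the downloaded file.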

TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-794328
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.0/json-events (4.13s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-418478 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-418478 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=kvm2 : (4.133033828s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (4.13s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-418478
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-418478: exit status 85 (56.873643ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-794328 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | -p download-only-794328        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| delete  | -p download-only-794328        | download-only-794328 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| start   | -o=json --download-only        | download-only-418478 | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | -p download-only-418478        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:29:19
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:29:19.955199   13384 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:29:19.955461   13384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:19.955470   13384 out.go:358] Setting ErrFile to fd 2...
	I0910 17:29:19.955474   13384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:19.955660   13384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5970/.minikube/bin
	I0910 17:29:19.956184   13384 out.go:352] Setting JSON to true
	I0910 17:29:19.957005   13384 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":709,"bootTime":1725988651,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:29:19.957058   13384 start.go:139] virtualization: kvm guest
	I0910 17:29:19.959224   13384 out.go:97] [download-only-418478] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 17:29:19.959364   13384 notify.go:220] Checking for updates...
	I0910 17:29:19.960904   13384 out.go:169] MINIKUBE_LOCATION=19598
	I0910 17:29:19.962379   13384 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:29:19.963806   13384 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19598-5970/kubeconfig
	I0910 17:29:19.965125   13384 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5970/.minikube
	I0910 17:29:19.966292   13384 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-418478 host does not exist
	  To start a cluster, run: "minikube start -p download-only-418478"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

TestDownloadOnly/v1.31.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.14s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-418478
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-654438 --alsologtostderr --binary-mirror http://127.0.0.1:33519 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-654438" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-654438
--- PASS: TestBinaryMirror (0.59s)

TestOffline (69.58s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-299759 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-299759 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m8.567015579s)
helpers_test.go:175: Cleaning up "offline-docker-299759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-299759
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-299759: (1.008445673s)
--- PASS: TestOffline (69.58s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-447248
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-447248: exit status 85 (50.693143ms)

-- stdout --
	* Profile "addons-447248" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-447248"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-447248
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-447248: exit status 85 (47.890639ms)

-- stdout --
	* Profile "addons-447248" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-447248"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (221.01s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-447248 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-447248 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m41.011809973s)
--- PASS: TestAddons/Setup (221.01s)

TestAddons/serial/Volcano (41.93s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 19.785817ms
addons_test.go:913: volcano-controller stabilized in 19.833873ms
addons_test.go:897: volcano-scheduler stabilized in 21.833378ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-vvpjp" [c79cceae-763c-4f34-887e-8aad3a759c95] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.005148194s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-xnssz" [6802193a-f2df-4f64-a9f8-d52d19a7f3f0] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005244796s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-m8wmk" [c81362df-a87b-4874-ad2e-22aea9a311c0] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005020216s
addons_test.go:932: (dbg) Run:  kubectl --context addons-447248 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-447248 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-447248 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [f3f0d51b-197c-407b-8336-196423b47c4b] Pending
helpers_test.go:344: "test-job-nginx-0" [f3f0d51b-197c-407b-8336-196423b47c4b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [f3f0d51b-197c-407b-8336-196423b47c4b] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.00437495s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-447248 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-447248 addons disable volcano --alsologtostderr -v=1: (10.529912423s)
--- PASS: TestAddons/serial/Volcano (41.93s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-447248 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-447248 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/parallel/Ingress (21.93s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-447248 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-447248 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-447248 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [da8981ff-c62b-4d8a-b05d-f0cd18621ee0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [da8981ff-c62b-4d8a-b05d-f0cd18621ee0] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003819074s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-447248 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-447248 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-447248 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.59
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-447248 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-447248 addons disable ingress-dns --alsologtostderr -v=1: (1.711092829s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-447248 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-447248 addons disable ingress --alsologtostderr -v=1: (8.083975083s)
--- PASS: TestAddons/parallel/Ingress (21.93s)
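
The essential step in the ingress check above is the Host header: the request targets a bare IP, and nginx routes it by virtual host to the backing service. A Go version of the same probe is sketched below; it assumes the ingress controller is reachable on the node IP from this run (the test itself curls 127.0.0.1 from inside the VM over ssh), and the hostname is the one the test's curl uses.

	package main

	import (
		"fmt"
		"log"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest("GET", "http://192.168.39.59/", nil)
		if err != nil {
			log.Fatal(err)
		}
		// Setting req.Host (not an entry in req.Header) is how Go's client
		// overrides the Host header sent on the wire.
		req.Host = "nginx.example.com"
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status)
	}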

TestAddons/parallel/InspektorGadget (10.84s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-z522r" [4c9a752d-26d5-4423-9ec1-3c28d6705a1d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004442467s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-447248
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-447248: (5.83732038s)
--- PASS: TestAddons/parallel/InspektorGadget (10.84s)

TestAddons/parallel/MetricsServer (6.71s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.16286ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-j6wml" [211ad8e2-5d48-416b-9ba8-e4ddb773a576] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004231126s
addons_test.go:417: (dbg) Run:  kubectl --context addons-447248 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-447248 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.71s)
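
The kubectl top pods call above is served by metrics-server through the aggregated metrics.k8s.io API, so it only works once the metrics-server pod is healthy, which is what the 6m0s wait checks. A short sketch of querying that API directly with the k8s.io/metrics client follows; the kubeconfig path resolution is an illustrative assumption.

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
		metricsv "k8s.io/metrics/pkg/client/clientset/versioned"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		mc, err := metricsv.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Same data kubectl top pods -n kube-system renders as a table.
		pods, err := mc.MetricsV1beta1().PodMetricses("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, pm := range pods.Items {
			for _, c := range pm.Containers {
				fmt.Printf("%s/%s cpu=%s mem=%s\n", pm.Name, c.Name, c.Usage.Cpu(), c.Usage.Memory())
			}
		}
	}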

TestAddons/parallel/HelmTiller (11.85s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 5.781855ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-925zh" [0f2d7fe9-69b8-477a-8a9d-a285eb7bfd9a] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.004363205s
addons_test.go:475: (dbg) Run:  kubectl --context addons-447248 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-447248 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.348331205s)
addons_test.go:480: kubectl --context addons-447248 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: error stream protocol error: unknown error
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-447248 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.85s)
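
Note: the "Unable to use a TTY" stderr above is expected when kubectl run is given -t from a non-interactive CI shell; kubectl falls back to streaming logs and the version check still passes. A minimal sketch of the same in-cluster Helm client check with -t dropped to avoid the warning (profile name and image taken from the log):

	# Run the Helm 2 client as a one-shot pod and print its version.
	kubectl --context addons-447248 run --rm helm-test \
	  --restart=Never --image=docker.io/alpine/helm:2.16.3 \
	  -i --namespace=kube-system -- version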

TestAddons/parallel/CSI (62.7s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.893184ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-447248 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-447248 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [94c60613-4a7f-4238-8064-41c6d8bb974b] Pending
helpers_test.go:344: "task-pv-pod" [94c60613-4a7f-4238-8064-41c6d8bb974b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [94c60613-4a7f-4238-8064-41c6d8bb974b] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003249123s
addons_test.go:590: (dbg) Run:  kubectl --context addons-447248 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-447248 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-447248 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-447248 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-447248 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-447248 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-447248 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c965ad70-a117-417f-96f1-e17358b12c37] Pending
helpers_test.go:344: "task-pv-pod-restore" [c965ad70-a117-417f-96f1-e17358b12c37] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c965ad70-a117-417f-96f1-e17358b12c37] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004855639s
addons_test.go:632: (dbg) Run:  kubectl --context addons-447248 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-447248 delete pod task-pv-pod-restore: (1.449661317s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-447248 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-447248 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-447248 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-447248 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.645975682s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-447248 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (62.70s)
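
Note: the testdata manifests are not reproduced in the log. A minimal sketch of the snapshot-and-restore flow the test exercises; the csi-hostpath-sc and csi-hostpath-snapclass class names are assumptions about the addon's defaults, not taken from the log:

	# Snapshot the bound PVC, then restore the snapshot into a new PVC.
	kubectl --context addons-447248 apply -f - <<-'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass  # assumed class name
	  source:
	    persistentVolumeClaimName: hpvc
	---
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc                # assumed class name
	  dataSource:
	    name: new-snapshot-demo
	    kind: VolumeSnapshot
	    apiGroup: snapshot.storage.k8s.io
	  accessModes: [ReadWriteOnce]
	  resources:
	    requests:
	      storage: 1Gi
	EOF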

TestAddons/parallel/Headlamp (14.21s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-447248 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-447248 --alsologtostderr -v=1: (1.049756049s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-hdfcd" [347e7a9f-c526-4a95-a758-c0a7f261fbfe] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-hdfcd" [347e7a9f-c526-4a95-a758-c0a7f261fbfe] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.003798696s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-447248 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (14.21s)

TestAddons/parallel/CloudSpanner (5.54s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-8p5vw" [9567f0ed-7d9f-453b-8a5c-38b6399e533e] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00423532s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-447248
--- PASS: TestAddons/parallel/CloudSpanner (5.54s)

TestAddons/parallel/LocalPath (55.5s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-447248 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-447248 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-447248 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [788e39cb-fa27-40ca-a6d7-d21acfa51eb6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [788e39cb-fa27-40ca-a6d7-d21acfa51eb6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [788e39cb-fa27-40ca-a6d7-d21acfa51eb6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004141778s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-447248 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-447248 ssh "cat /opt/local-path-provisioner/pvc-75c4d344-3c02-4cb9-bec5-00ae17ac00c0_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-447248 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-447248 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-447248 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-447248 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.656339025s)
--- PASS: TestAddons/parallel/LocalPath (55.50s)
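
Note: the repeated PVC polls above are normal for the local-path provisioner: its storage class binds on first consumer, so the claim stays Pending until a pod is scheduled, and the data lands under /opt/local-path-provisioner on the node (which is what the ssh "cat ..." step verifies). A minimal sketch of such a claim, assuming the addon installs the upstream provisioner's "local-path" storage class name:

	# Claim local-path storage; it binds once a pod actually mounts it.
	kubectl --context addons-447248 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  storageClassName: local-path  # assumed class name
	  accessModes: [ReadWriteOnce]
	  resources:
	    requests:
	      storage: 64Mi
	EOF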

TestAddons/parallel/NvidiaDevicePlugin (6.4s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zwwn8" [35ae1fa5-59e3-488b-ba97-e0cfeba39e93] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003871392s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-447248
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.40s)

TestAddons/parallel/Yakd (10.86s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-4ws8d" [3d6299e8-70de-44f4-9ba9-a37ba7d882ae] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.008010583s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-447248 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-447248 addons disable yakd --alsologtostderr -v=1: (5.846824844s)
--- PASS: TestAddons/parallel/Yakd (10.86s)

TestAddons/StoppedEnableDisable (8.54s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-447248
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-447248: (8.283206708s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-447248
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-447248
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-447248
--- PASS: TestAddons/StoppedEnableDisable (8.54s)

TestCertOptions (75.04s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-886487 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-886487 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m13.579156397s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-886487 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-886487 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-886487 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-886487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-886487
--- PASS: TestCertOptions (75.04s)
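
Note: an equivalent manual check that the extra names and IPs were baked into the serving certificate (start/ssh commands from the log; the grep is added for readability):

	minikube start -p cert-options-886487 --memory=2048 \
	  --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
	  --apiserver-names=localhost --apiserver-names=www.google.com \
	  --apiserver-port=8555 --driver=kvm2
	# The extra IPs and names must appear among the Subject Alternative Names:
	minikube -p cert-options-886487 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'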

TestCertExpiration (320.32s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-712693 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-712693 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m38.939098303s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-712693 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-712693 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (40.227219345s)
helpers_test.go:175: Cleaning up "cert-expiration-712693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-712693
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-712693: (1.152481431s)
--- PASS: TestCertExpiration (320.32s)
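
Note: the ~320s wall time is mostly a deliberate wait: the cluster is created with certificates that expire in 3 minutes, left to lapse, then restarted with a one-year expiry so minikube has to regenerate them. A sketch of the same sequence (the sleep is an assumption about how the test waits):

	minikube start -p cert-expiration-712693 --memory=2048 --cert-expiration=3m --driver=kvm2
	sleep 180  # let the 3m certificates lapse
	minikube start -p cert-expiration-712693 --memory=2048 --cert-expiration=8760h --driver=kvm2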

TestDockerFlags (54.48s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-757079 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-757079 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (53.177560643s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-757079 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-757079 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-757079" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-757079
--- PASS: TestDockerFlags (54.48s)
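
Note: the two systemctl probes above are how the flags are verified inside the VM: --docker-env values should surface in the docker unit's Environment property, and --docker-opt values should be appended to dockerd's ExecStart flags.

	minikube -p docker-flags-757079 ssh \
	  "sudo systemctl show docker --property=Environment --no-pager"  # expect FOO=BAR and BAZ=BAT
	minikube -p docker-flags-757079 ssh \
	  "sudo systemctl show docker --property=ExecStart --no-pager"    # expect the debug and icc=true opts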

TestForceSystemdFlag (76.69s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-381044 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-381044 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m15.664235718s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-381044 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-381044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-381044
--- PASS: TestForceSystemdFlag (76.69s)
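
Note: this test and TestForceSystemdEnv below assert the same property from two directions: the --force-systemd flag here and, presumably, the MINIKUBE_FORCE_SYSTEMD environment variable there (the env var is an assumption from the test name, not shown in the log). Either way the check is:

	minikube -p force-systemd-flag-381044 ssh "docker info --format {{.CgroupDriver}}"  # expect: systemd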

TestForceSystemdEnv (85.9s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-303625 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
E0910 18:31:56.657492   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/skaffold-089494/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:31:56.664011   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/skaffold-089494/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:31:56.675521   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/skaffold-089494/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:31:56.697082   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/skaffold-089494/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:31:56.738620   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/skaffold-089494/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:31:56.820155   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/skaffold-089494/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:31:56.981734   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/skaffold-089494/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:31:57.303481   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/skaffold-089494/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:31:57.945032   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/skaffold-089494/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:31:59.226704   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/skaffold-089494/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:32:01.788975   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/skaffold-089494/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:32:06.910434   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/skaffold-089494/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-303625 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m24.239090952s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-303625 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-303625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-303625
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-303625: (1.356817623s)
--- PASS: TestForceSystemdEnv (85.90s)

TestKVMDriverInstallOrUpdate (5.17s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.17s)

TestErrorSpam/setup (48.6s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-650289 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-650289 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-650289 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-650289 --driver=kvm2 : (48.600224832s)
--- PASS: TestErrorSpam/setup (48.60s)

TestErrorSpam/start (0.34s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-650289 --log_dir /tmp/nospam-650289 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-650289 --log_dir /tmp/nospam-650289 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-650289 --log_dir /tmp/nospam-650289 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.7s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-650289 --log_dir /tmp/nospam-650289 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-650289 --log_dir /tmp/nospam-650289 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-650289 --log_dir /tmp/nospam-650289 status
--- PASS: TestErrorSpam/status (0.70s)

TestErrorSpam/pause (1.19s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-650289 --log_dir /tmp/nospam-650289 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-650289 --log_dir /tmp/nospam-650289 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-650289 --log_dir /tmp/nospam-650289 pause
--- PASS: TestErrorSpam/pause (1.19s)

TestErrorSpam/unpause (1.41s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-650289 --log_dir /tmp/nospam-650289 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-650289 --log_dir /tmp/nospam-650289 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-650289 --log_dir /tmp/nospam-650289 unpause
--- PASS: TestErrorSpam/unpause (1.41s)

TestErrorSpam/stop (5.98s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-650289 --log_dir /tmp/nospam-650289 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-650289 --log_dir /tmp/nospam-650289 stop: (3.417709977s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-650289 --log_dir /tmp/nospam-650289 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-650289 --log_dir /tmp/nospam-650289 stop: (1.209969826s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-650289 --log_dir /tmp/nospam-650289 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-650289 --log_dir /tmp/nospam-650289 stop: (1.355286313s)
--- PASS: TestErrorSpam/stop (5.98s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19598-5970/.minikube/files/etc/test/nested/copy/13146/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (88.42s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-256199 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-256199 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m28.422477389s)
--- PASS: TestFunctional/serial/StartWithProxy (88.42s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (41.46s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-256199 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-256199 --alsologtostderr -v=8: (41.457833882s)
functional_test.go:663: soft start took 41.458498895s for "functional-256199" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.46s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-256199 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.39s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.39s)
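
Note: the cache subcommands manage a host-side image cache that is preloaded into the node, so later pulls are served locally. The basic round trip (image names from the log):

	minikube -p functional-256199 cache add registry.k8s.io/pause:3.1
	minikube cache list                               # host-side cache contents
	minikube cache delete registry.k8s.io/pause:3.1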

TestFunctional/serial/CacheCmd/cache/add_local (1.26s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-256199 /tmp/TestFunctionalserialCacheCmdcacheadd_local2750578780/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 cache add minikube-local-cache-test:functional-256199
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 cache delete minikube-local-cache-test:functional-256199
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-256199
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.26s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.2s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-256199 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (227.301668ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.20s)
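
Note: the non-zero crictl exit above is the point of the test: the image is removed inside the node, confirmed gone, then restored from the host-side cache by "cache reload" without a network pull.

	minikube -p functional-256199 ssh sudo docker rmi registry.k8s.io/pause:latest
	minikube -p functional-256199 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exits 1
	minikube -p functional-256199 cache reload
	minikube -p functional-256199 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds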

TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 kubectl -- --context functional-256199 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-256199 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (42.33s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-256199 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-256199 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.325461011s)
functional_test.go:761: restart took 42.325590067s for "functional-256199" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.33s)
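
Note: --extra-config=component.key=value passes a flag straight through to the named control-plane component and is persisted in the profile; the same setting shows up later in the DryRun config dump as an ExtraOptions entry. The invocation from the log:

	minikube start -p functional-256199 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all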

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-256199 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.98s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 logs
--- PASS: TestFunctional/serial/LogsCmd (0.98s)

TestFunctional/serial/LogsFileCmd (0.99s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 logs --file /tmp/TestFunctionalserialLogsFileCmd891673551/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.99s)

TestFunctional/serial/InvalidService (4.45s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-256199 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-256199
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-256199: exit status 115 (279.738537ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.39:30133 |
	|-----------|-------------|-------------|----------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-256199 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.45s)

TestFunctional/parallel/ConfigCmd (0.29s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-256199 config get cpus: exit status 14 (41.818015ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-256199 config get cpus: exit status 14 (45.835776ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)
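
Note: exit status 14 paired with "specified key could not be found in config" is the expected miss path, so the sequence above asserts both the value round trip and the error:

	minikube -p functional-256199 config set cpus 2
	minikube -p functional-256199 config get cpus    # prints 2
	minikube -p functional-256199 config unset cpus
	minikube -p functional-256199 config get cpus    # exit status 14: key not found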

TestFunctional/parallel/DashboardCmd (13.76s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-256199 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-256199 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23209: os: process already finished
E0910 17:48:06.276087   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:48:06.282871   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:48:06.294252   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:48:06.315756   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:48:06.357144   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:48:06.438649   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:48:06.600240   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:48:06.921982   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:48:07.563764   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:48:08.845327   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/DashboardCmd (13.76s)

TestFunctional/parallel/DryRun (0.27s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-256199 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-256199 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (137.169209ms)
-- stdout --
	* [functional-256199] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-5970/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5970/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
-- /stdout --
** stderr ** 
	I0910 17:47:50.609299   23084 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:47:50.609546   23084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:47:50.609555   23084 out.go:358] Setting ErrFile to fd 2...
	I0910 17:47:50.609560   23084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:47:50.609737   23084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5970/.minikube/bin
	I0910 17:47:50.610267   23084 out.go:352] Setting JSON to false
	I0910 17:47:50.611222   23084 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1820,"bootTime":1725988651,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:47:50.611274   23084 start.go:139] virtualization: kvm guest
	I0910 17:47:50.613481   23084 out.go:177] * [functional-256199] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0910 17:47:50.615025   23084 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 17:47:50.615048   23084 notify.go:220] Checking for updates...
	I0910 17:47:50.617845   23084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:47:50.619355   23084 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5970/kubeconfig
	I0910 17:47:50.620778   23084 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5970/.minikube
	I0910 17:47:50.622083   23084 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 17:47:50.623602   23084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 17:47:50.625279   23084 config.go:182] Loaded profile config "functional-256199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:47:50.625709   23084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:47:50.625779   23084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:47:50.640729   23084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36635
	I0910 17:47:50.641169   23084 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:47:50.641779   23084 main.go:141] libmachine: Using API Version  1
	I0910 17:47:50.641805   23084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:47:50.642135   23084 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:47:50.642349   23084 main.go:141] libmachine: (functional-256199) Calling .DriverName
	I0910 17:47:50.642683   23084 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:47:50.642971   23084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:47:50.643007   23084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:47:50.657977   23084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43409
	I0910 17:47:50.658469   23084 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:47:50.658959   23084 main.go:141] libmachine: Using API Version  1
	I0910 17:47:50.658981   23084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:47:50.659264   23084 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:47:50.659424   23084 main.go:141] libmachine: (functional-256199) Calling .DriverName
	I0910 17:47:50.696161   23084 out.go:177] * Using the kvm2 driver based on existing profile
	I0910 17:47:50.697516   23084 start.go:297] selected driver: kvm2
	I0910 17:47:50.697529   23084 start.go:901] validating driver "kvm2" against &{Name:functional-256199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-256199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:47:50.697619   23084 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 17:47:50.699916   23084 out.go:201] 
	W0910 17:47:50.701379   23084 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0910 17:47:50.702753   23084 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-256199 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)
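
For anyone reproducing this check outside the harness, the dry-run validation can be driven directly against the existing profile; a sketch using the profile name and exit code observed above:

	# Undersized memory request: expect a non-zero exit (23 here,
	# RSRC_INSUFFICIENT_REQ_MEMORY) and no changes to the profile.
	out/minikube-linux-amd64 start -p functional-256199 --dry-run --memory 250MB --driver=kvm2

	# The follow-up dry run without the undersized request should exit 0.
	out/minikube-linux-amd64 start -p functional-256199 --dry-run --alsologtostderr -v=1 --driver=kvm2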

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-256199 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-256199 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (134.800967ms)

-- stdout --
	* [functional-256199] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-5970/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5970/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
-- /stdout --
** stderr ** 
	I0910 17:47:48.268715   22666 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:47:48.268826   22666 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:47:48.268832   22666 out.go:358] Setting ErrFile to fd 2...
	I0910 17:47:48.268837   22666 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:47:48.269094   22666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5970/.minikube/bin
	I0910 17:47:48.269603   22666 out.go:352] Setting JSON to false
	I0910 17:47:48.270523   22666 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1817,"bootTime":1725988651,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:47:48.270600   22666 start.go:139] virtualization: kvm guest
	I0910 17:47:48.273023   22666 out.go:177] * [functional-256199] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0910 17:47:48.274968   22666 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 17:47:48.274985   22666 notify.go:220] Checking for updates...
	I0910 17:47:48.277848   22666 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:47:48.279371   22666 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5970/kubeconfig
	I0910 17:47:48.280868   22666 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5970/.minikube
	I0910 17:47:48.282226   22666 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 17:47:48.283762   22666 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 17:47:48.285476   22666 config.go:182] Loaded profile config "functional-256199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:47:48.285901   22666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:47:48.285972   22666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:47:48.303465   22666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45323
	I0910 17:47:48.303920   22666 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:47:48.304558   22666 main.go:141] libmachine: Using API Version  1
	I0910 17:47:48.304579   22666 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:47:48.304938   22666 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:47:48.305117   22666 main.go:141] libmachine: (functional-256199) Calling .DriverName
	I0910 17:47:48.305358   22666 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:47:48.305649   22666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:47:48.305684   22666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:47:48.320856   22666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33753
	I0910 17:47:48.321232   22666 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:47:48.321739   22666 main.go:141] libmachine: Using API Version  1
	I0910 17:47:48.321769   22666 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:47:48.322070   22666 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:47:48.322250   22666 main.go:141] libmachine: (functional-256199) Calling .DriverName
	I0910 17:47:48.354349   22666 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0910 17:47:48.355683   22666 start.go:297] selected driver: kvm2
	I0910 17:47:48.355696   22666 start.go:901] validating driver "kvm2" against &{Name:functional-256199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19598/minikube-v1.34.0-1725912912-19598-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-256199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:47:48.355811   22666 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 17:47:48.358149   22666 out.go:201] 
	W0910 17:47:48.359478   22666 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0910 17:47:48.360686   22666 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
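
The French output above is the point of the test: it is the same RSRC_INSUFFICIENT_REQ_MEMORY failure as in DryRun, localized ("Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB"). minikube selects the language from the locale environment, so a manual reproduction presumably looks like the following (the exact variable the harness sets is not visible in this log):

	# Hypothetical reproduction via the locale environment.
	LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-256199 --dry-run --memory 250MB --driver=kvm2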

TestFunctional/parallel/StatusCmd (0.75s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.75s)
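
The three invocations cover the default output, a custom Go template, and JSON. Note that "kublet:" in the template is a literal output label (spelled that way in the test source); the field reference itself is the correctly spelled {{.Kubelet}}. With shell quoting added, the same calls are:

	out/minikube-linux-amd64 -p functional-256199 status
	out/minikube-linux-amd64 -p functional-256199 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	out/minikube-linux-amd64 -p functional-256199 status -o json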

TestFunctional/parallel/ServiceCmdConnect (25.49s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-256199 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-256199 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-rt2b6" [619a0965-91f6-4378-8c81-1f49fac95456] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-rt2b6" [619a0965-91f6-4378-8c81-1f49fac95456] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 25.003591788s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.39:32009
functional_test.go:1675: http://192.168.39.39:32009: success! body:

Hostname: hello-node-connect-67bdd5bbb4-rt2b6

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.39:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.39:32009
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (25.49s)
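
The test deploys echoserver, exposes it as a NodePort service, resolves the URL through minikube, and asserts on the echoed request. The equivalent manual sequence (the IP and port are specific to this run):

	kubectl --context functional-256199 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-256199 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-256199 service hello-node-connect --url)
	curl -s "$URL"   # resolved to http://192.168.39.39:32009 in this run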

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (47.28s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1f0af185-13eb-4911-87d0-8475e5b4202f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008819629s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-256199 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-256199 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-256199 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-256199 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-256199 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2bbbc3bb-fdf4-4fe2-b2a2-77f500aac98c] Pending
helpers_test.go:344: "sp-pod" [2bbbc3bb-fdf4-4fe2-b2a2-77f500aac98c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2bbbc3bb-fdf4-4fe2-b2a2-77f500aac98c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.004225107s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-256199 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-256199 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-256199 delete -f testdata/storage-provisioner/pod.yaml: (1.066587076s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-256199 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [24e35554-0d41-4822-bae4-ecc982f4c522] Pending
helpers_test.go:344: "sp-pod" [24e35554-0d41-4822-bae4-ecc982f4c522] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [24e35554-0d41-4822-bae4-ecc982f4c522] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003952691s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-256199 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.28s)
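
The pass criterion here is durability: a file written to the PVC-backed mount must survive deleting and recreating the pod. Condensed from the steps above (the manifests are the repo's testdata files):

	kubectl --context functional-256199 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-256199 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-256199 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-256199 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-256199 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-256199 exec sp-pod -- ls /tmp/mount   # foo persists across pods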

TestFunctional/parallel/SSHCmd (0.4s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.40s)

TestFunctional/parallel/CpCmd (1.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh -n functional-256199 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 cp functional-256199:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1458795370/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh -n functional-256199 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh -n functional-256199 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.29s)
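
The three copies exercised are host-to-VM, VM-to-host (using the profile:path form), and host-to-VM into a directory that does not yet exist:

	out/minikube-linux-amd64 -p functional-256199 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-256199 cp functional-256199:/home/docker/cp-test.txt /tmp/cp-test.txt
	out/minikube-linux-amd64 -p functional-256199 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt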

TestFunctional/parallel/MySQL (31.96s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-256199 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-gzkhw" [dd1a53e8-b390-4ba8-8fc2-ee35492badbe] Pending
helpers_test.go:344: "mysql-6cdb49bbb-gzkhw" [dd1a53e8-b390-4ba8-8fc2-ee35492badbe] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-gzkhw" [dd1a53e8-b390-4ba8-8fc2-ee35492badbe] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.004662995s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-256199 exec mysql-6cdb49bbb-gzkhw -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-256199 exec mysql-6cdb49bbb-gzkhw -- mysql -ppassword -e "show databases;": exit status 1 (212.202404ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-256199 exec mysql-6cdb49bbb-gzkhw -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-256199 exec mysql-6cdb49bbb-gzkhw -- mysql -ppassword -e "show databases;": exit status 1 (261.841649ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-256199 exec mysql-6cdb49bbb-gzkhw -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-256199 exec mysql-6cdb49bbb-gzkhw -- mysql -ppassword -e "show databases;": exit status 1 (170.363616ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-256199 exec mysql-6cdb49bbb-gzkhw -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-256199 exec mysql-6cdb49bbb-gzkhw -- mysql -ppassword -e "show databases;": exit status 1 (163.551821ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-256199 exec mysql-6cdb49bbb-gzkhw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.96s)
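
The repeated non-zero exits above are expected: while mysqld initializes inside the pod, the client first sees transient "Access denied" errors and then a socket-connect error, and the harness retries until the query succeeds. A manual equivalent of that retry loop (the pod name is specific to this run):

	# Poll until mysqld accepts the query; transient startup errors are normal.
	until kubectl --context functional-256199 exec mysql-6cdb49bbb-gzkhw -- \
	    mysql -ppassword -e "show databases;"; do
	  sleep 2
	done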

TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/13146/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh "sudo cat /etc/test/nested/copy/13146/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)
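
File sync propagates files staged under $MINIKUBE_HOME/files on the host into the VM at the mirrored absolute path (the 13146 component in the path is the harness PID). A sketch, assuming that staging layout:

	mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/13146"
	echo "Test file for checking file sync process" \
	  > "$MINIKUBE_HOME/files/etc/test/nested/copy/13146/hosts"
	# After the next start, the file is visible inside the VM:
	out/minikube-linux-amd64 -p functional-256199 ssh "sudo cat /etc/test/nested/copy/13146/hosts"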

TestFunctional/parallel/CertSync (1.31s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/13146.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh "sudo cat /etc/ssl/certs/13146.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/13146.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh "sudo cat /usr/share/ca-certificates/13146.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/131462.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh "sudo cat /etc/ssl/certs/131462.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/131462.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh "sudo cat /usr/share/ca-certificates/131462.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.31s)
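
Certificates staged under $MINIKUBE_HOME/certs are installed into the VM under /etc/ssl/certs and /usr/share/ca-certificates, along with OpenSSL subject-hash links (the 51391683.0 and 3ec20f2e.0 names above). The checks reduce to reading each path over SSH:

	for f in /etc/ssl/certs/13146.pem /usr/share/ca-certificates/13146.pem /etc/ssl/certs/51391683.0; do
	  out/minikube-linux-amd64 -p functional-256199 ssh "sudo cat $f" > /dev/null && echo "$f ok"
	done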

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-256199 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
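
The single command dumps every label key on the first node with a Go template, so the presence of the expected keys can be asserted:

	kubectl --context functional-256199 get nodes --output=go-template \
	  --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'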

TestFunctional/parallel/NonActiveRuntimeDisabled (0.23s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-256199 ssh "sudo systemctl is-active crio": exit status 1 (226.393233ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.23s)
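
Because this profile runs the docker runtime, crio must be inactive. systemctl is-active prints the unit state and exits with status 3 for an inactive unit (visible as "Process exited with status 3" above), which is the non-zero result the test expects:

	out/minikube-linux-amd64 -p functional-256199 ssh "sudo systemctl is-active crio"    # prints 'inactive', exits non-zero
	out/minikube-linux-amd64 -p functional-256199 ssh "sudo systemctl is-active docker"  # the configured runtime should report 'active'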

TestFunctional/parallel/License (0.19s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-256199 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-256199
docker.io/kicbase/echo-server:functional-256199
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-256199 image ls --format short --alsologtostderr:
I0910 17:47:53.304076   23305 out.go:345] Setting OutFile to fd 1 ...
I0910 17:47:53.304349   23305 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:47:53.304361   23305 out.go:358] Setting ErrFile to fd 2...
I0910 17:47:53.304365   23305 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:47:53.304575   23305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5970/.minikube/bin
I0910 17:47:53.305122   23305 config.go:182] Loaded profile config "functional-256199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:47:53.305247   23305 config.go:182] Loaded profile config "functional-256199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:47:53.305648   23305 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0910 17:47:53.305696   23305 main.go:141] libmachine: Launching plugin server for driver kvm2
I0910 17:47:53.320234   23305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42011
I0910 17:47:53.320729   23305 main.go:141] libmachine: () Calling .GetVersion
I0910 17:47:53.321240   23305 main.go:141] libmachine: Using API Version  1
I0910 17:47:53.321260   23305 main.go:141] libmachine: () Calling .SetConfigRaw
I0910 17:47:53.321591   23305 main.go:141] libmachine: () Calling .GetMachineName
I0910 17:47:53.321760   23305 main.go:141] libmachine: (functional-256199) Calling .GetState
I0910 17:47:53.323590   23305 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0910 17:47:53.323632   23305 main.go:141] libmachine: Launching plugin server for driver kvm2
I0910 17:47:53.338033   23305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38599
I0910 17:47:53.338401   23305 main.go:141] libmachine: () Calling .GetVersion
I0910 17:47:53.338936   23305 main.go:141] libmachine: Using API Version  1
I0910 17:47:53.338960   23305 main.go:141] libmachine: () Calling .SetConfigRaw
I0910 17:47:53.339294   23305 main.go:141] libmachine: () Calling .GetMachineName
I0910 17:47:53.339470   23305 main.go:141] libmachine: (functional-256199) Calling .DriverName
I0910 17:47:53.339645   23305 ssh_runner.go:195] Run: systemctl --version
I0910 17:47:53.339665   23305 main.go:141] libmachine: (functional-256199) Calling .GetSSHHostname
I0910 17:47:53.342637   23305 main.go:141] libmachine: (functional-256199) DBG | domain functional-256199 has defined MAC address 52:54:00:48:30:8f in network mk-functional-256199
I0910 17:47:53.343077   23305 main.go:141] libmachine: (functional-256199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:30:8f", ip: ""} in network mk-functional-256199: {Iface:virbr1 ExpiryTime:2024-09-10 18:44:29 +0000 UTC Type:0 Mac:52:54:00:48:30:8f Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:functional-256199 Clientid:01:52:54:00:48:30:8f}
I0910 17:47:53.343108   23305 main.go:141] libmachine: (functional-256199) DBG | domain functional-256199 has defined IP address 192.168.39.39 and MAC address 52:54:00:48:30:8f in network mk-functional-256199
I0910 17:47:53.343325   23305 main.go:141] libmachine: (functional-256199) Calling .GetSSHPort
I0910 17:47:53.343502   23305 main.go:141] libmachine: (functional-256199) Calling .GetSSHKeyPath
I0910 17:47:53.343650   23305 main.go:141] libmachine: (functional-256199) Calling .GetSSHUsername
I0910 17:47:53.343858   23305 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/functional-256199/id_rsa Username:docker}
I0910 17:47:53.424728   23305 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0910 17:47:53.450662   23305 main.go:141] libmachine: Making call to close driver server
I0910 17:47:53.450686   23305 main.go:141] libmachine: (functional-256199) Calling .Close
I0910 17:47:53.450960   23305 main.go:141] libmachine: Successfully made call to close driver server
I0910 17:47:53.450979   23305 main.go:141] libmachine: Making call to close connection to plugin binary
I0910 17:47:53.450987   23305 main.go:141] libmachine: Making call to close driver server
I0910 17:47:53.450994   23305 main.go:141] libmachine: (functional-256199) Calling .Close
I0910 17:47:53.450994   23305 main.go:141] libmachine: (functional-256199) DBG | Closing plugin on server side
I0910 17:47:53.451189   23305 main.go:141] libmachine: Successfully made call to close driver server
I0910 17:47:53.451207   23305 main.go:141] libmachine: Making call to close connection to plugin binary
I0910 17:47:53.451304   23305 main.go:141] libmachine: (functional-256199) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.56s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.56s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-256199 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | 045733566833c | 88.4MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/kube-scheduler              | v1.31.0           | 1766f54c897f0 | 67.4MB |
| registry.k8s.io/kube-apiserver              | v1.31.0           | 604f5db92eaa8 | 94.2MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-256199 | b1c95235bbca1 | 30B    |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| localhost/my-image                          | functional-256199 | d6eb2795d81ca | 1.24MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/kube-proxy                  | v1.31.0           | ad83b2ca7b09e | 91.5MB |
| docker.io/kicbase/echo-server               | functional-256199 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-256199 image ls --format table --alsologtostderr:
I0910 17:47:57.404012   23821 out.go:345] Setting OutFile to fd 1 ...
I0910 17:47:57.404263   23821 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:47:57.404273   23821 out.go:358] Setting ErrFile to fd 2...
I0910 17:47:57.404278   23821 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:47:57.404468   23821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5970/.minikube/bin
I0910 17:47:57.405038   23821 config.go:182] Loaded profile config "functional-256199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:47:57.405153   23821 config.go:182] Loaded profile config "functional-256199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:47:57.405548   23821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0910 17:47:57.405607   23821 main.go:141] libmachine: Launching plugin server for driver kvm2
I0910 17:47:57.420765   23821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44525
I0910 17:47:57.421212   23821 main.go:141] libmachine: () Calling .GetVersion
I0910 17:47:57.421787   23821 main.go:141] libmachine: Using API Version  1
I0910 17:47:57.421814   23821 main.go:141] libmachine: () Calling .SetConfigRaw
I0910 17:47:57.422158   23821 main.go:141] libmachine: () Calling .GetMachineName
I0910 17:47:57.422367   23821 main.go:141] libmachine: (functional-256199) Calling .GetState
I0910 17:47:57.424656   23821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0910 17:47:57.424699   23821 main.go:141] libmachine: Launching plugin server for driver kvm2
I0910 17:47:57.440311   23821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35319
I0910 17:47:57.440748   23821 main.go:141] libmachine: () Calling .GetVersion
I0910 17:47:57.441247   23821 main.go:141] libmachine: Using API Version  1
I0910 17:47:57.441273   23821 main.go:141] libmachine: () Calling .SetConfigRaw
I0910 17:47:57.441596   23821 main.go:141] libmachine: () Calling .GetMachineName
I0910 17:47:57.441775   23821 main.go:141] libmachine: (functional-256199) Calling .DriverName
I0910 17:47:57.441998   23821 ssh_runner.go:195] Run: systemctl --version
I0910 17:47:57.442032   23821 main.go:141] libmachine: (functional-256199) Calling .GetSSHHostname
I0910 17:47:57.445008   23821 main.go:141] libmachine: (functional-256199) DBG | domain functional-256199 has defined MAC address 52:54:00:48:30:8f in network mk-functional-256199
I0910 17:47:57.445337   23821 main.go:141] libmachine: (functional-256199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:30:8f", ip: ""} in network mk-functional-256199: {Iface:virbr1 ExpiryTime:2024-09-10 18:44:29 +0000 UTC Type:0 Mac:52:54:00:48:30:8f Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:functional-256199 Clientid:01:52:54:00:48:30:8f}
I0910 17:47:57.445365   23821 main.go:141] libmachine: (functional-256199) DBG | domain functional-256199 has defined IP address 192.168.39.39 and MAC address 52:54:00:48:30:8f in network mk-functional-256199
I0910 17:47:57.445504   23821 main.go:141] libmachine: (functional-256199) Calling .GetSSHPort
I0910 17:47:57.445650   23821 main.go:141] libmachine: (functional-256199) Calling .GetSSHKeyPath
I0910 17:47:57.445763   23821 main.go:141] libmachine: (functional-256199) Calling .GetSSHUsername
I0910 17:47:57.445871   23821 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/functional-256199/id_rsa Username:docker}
I0910 17:47:57.563222   23821 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0910 17:47:57.628943   23821 main.go:141] libmachine: Making call to close driver server
I0910 17:47:57.628967   23821 main.go:141] libmachine: (functional-256199) Calling .Close
I0910 17:47:57.629266   23821 main.go:141] libmachine: Successfully made call to close driver server
I0910 17:47:57.629282   23821 main.go:141] libmachine: Making call to close connection to plugin binary
I0910 17:47:57.629291   23821 main.go:141] libmachine: Making call to close driver server
I0910 17:47:57.629300   23821 main.go:141] libmachine: (functional-256199) Calling .Close
I0910 17:47:57.629601   23821 main.go:141] libmachine: (functional-256199) DBG | Closing plugin on server side
I0910 17:47:57.629601   23821 main.go:141] libmachine: Successfully made call to close driver server
I0910 17:47:57.629656   23821 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-256199 image ls --format json --alsologtostderr:
[{"id":"b1c95235bbca1bf1113ca7915ab5e82779a2012c2c6fccfaffd68dcc464048f1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-256199"],"size":"30"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"d6eb2795d81ca4341ad0e53948954ec9b7f9d2f1c755387f48e7a09071777489","repoDigests":[],"repoTags":["localhost/my-image:functional-256199"],"size":"1240000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-256199"],"size":"4940000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67400000"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"94200000"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"88400000"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"91500000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-256199 image ls --format json --alsologtostderr:
I0910 17:47:57.151730   23720 out.go:345] Setting OutFile to fd 1 ...
I0910 17:47:57.151874   23720 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:47:57.151887   23720 out.go:358] Setting ErrFile to fd 2...
I0910 17:47:57.151893   23720 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:47:57.152217   23720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5970/.minikube/bin
I0910 17:47:57.153020   23720 config.go:182] Loaded profile config "functional-256199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:47:57.153173   23720 config.go:182] Loaded profile config "functional-256199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:47:57.153726   23720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0910 17:47:57.153800   23720 main.go:141] libmachine: Launching plugin server for driver kvm2
I0910 17:47:57.168986   23720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
I0910 17:47:57.169524   23720 main.go:141] libmachine: () Calling .GetVersion
I0910 17:47:57.170100   23720 main.go:141] libmachine: Using API Version  1
I0910 17:47:57.170124   23720 main.go:141] libmachine: () Calling .SetConfigRaw
I0910 17:47:57.170495   23720 main.go:141] libmachine: () Calling .GetMachineName
I0910 17:47:57.170704   23720 main.go:141] libmachine: (functional-256199) Calling .GetState
I0910 17:47:57.172429   23720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0910 17:47:57.172470   23720 main.go:141] libmachine: Launching plugin server for driver kvm2
I0910 17:47:57.187948   23720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41999
I0910 17:47:57.188372   23720 main.go:141] libmachine: () Calling .GetVersion
I0910 17:47:57.188922   23720 main.go:141] libmachine: Using API Version  1
I0910 17:47:57.188949   23720 main.go:141] libmachine: () Calling .SetConfigRaw
I0910 17:47:57.189272   23720 main.go:141] libmachine: () Calling .GetMachineName
I0910 17:47:57.189460   23720 main.go:141] libmachine: (functional-256199) Calling .DriverName
I0910 17:47:57.189657   23720 ssh_runner.go:195] Run: systemctl --version
I0910 17:47:57.189685   23720 main.go:141] libmachine: (functional-256199) Calling .GetSSHHostname
I0910 17:47:57.192169   23720 main.go:141] libmachine: (functional-256199) DBG | domain functional-256199 has defined MAC address 52:54:00:48:30:8f in network mk-functional-256199
I0910 17:47:57.192556   23720 main.go:141] libmachine: (functional-256199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:30:8f", ip: ""} in network mk-functional-256199: {Iface:virbr1 ExpiryTime:2024-09-10 18:44:29 +0000 UTC Type:0 Mac:52:54:00:48:30:8f Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:functional-256199 Clientid:01:52:54:00:48:30:8f}
I0910 17:47:57.192591   23720 main.go:141] libmachine: (functional-256199) DBG | domain functional-256199 has defined IP address 192.168.39.39 and MAC address 52:54:00:48:30:8f in network mk-functional-256199
I0910 17:47:57.192717   23720 main.go:141] libmachine: (functional-256199) Calling .GetSSHPort
I0910 17:47:57.192899   23720 main.go:141] libmachine: (functional-256199) Calling .GetSSHKeyPath
I0910 17:47:57.193034   23720 main.go:141] libmachine: (functional-256199) Calling .GetSSHUsername
I0910 17:47:57.193318   23720 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/functional-256199/id_rsa Username:docker}
I0910 17:47:57.288546   23720 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0910 17:47:57.340677   23720 main.go:141] libmachine: Making call to close driver server
I0910 17:47:57.340689   23720 main.go:141] libmachine: (functional-256199) Calling .Close
I0910 17:47:57.342566   23720 main.go:141] libmachine: (functional-256199) DBG | Closing plugin on server side
I0910 17:47:57.342585   23720 main.go:141] libmachine: Successfully made call to close driver server
I0910 17:47:57.342600   23720 main.go:141] libmachine: Making call to close connection to plugin binary
I0910 17:47:57.342619   23720 main.go:141] libmachine: Making call to close driver server
I0910 17:47:57.342627   23720 main.go:141] libmachine: (functional-256199) Calling .Close
I0910 17:47:57.342844   23720 main.go:141] libmachine: Successfully made call to close driver server
I0910 17:47:57.342856   23720 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-256199 image ls --format yaml --alsologtostderr:
- id: b1c95235bbca1bf1113ca7915ab5e82779a2012c2c6fccfaffd68dcc464048f1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-256199
size: "30"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-256199
size: "4940000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67400000"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "88400000"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "94200000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "91500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-256199 image ls --format yaml --alsologtostderr:
I0910 17:47:53.496548   23329 out.go:345] Setting OutFile to fd 1 ...
I0910 17:47:53.496668   23329 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:47:53.496679   23329 out.go:358] Setting ErrFile to fd 2...
I0910 17:47:53.496684   23329 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:47:53.496882   23329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5970/.minikube/bin
I0910 17:47:53.497436   23329 config.go:182] Loaded profile config "functional-256199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:47:53.497531   23329 config.go:182] Loaded profile config "functional-256199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:47:53.497882   23329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0910 17:47:53.497923   23329 main.go:141] libmachine: Launching plugin server for driver kvm2
I0910 17:47:53.513582   23329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44561
I0910 17:47:53.514001   23329 main.go:141] libmachine: () Calling .GetVersion
I0910 17:47:53.514606   23329 main.go:141] libmachine: Using API Version  1
I0910 17:47:53.514631   23329 main.go:141] libmachine: () Calling .SetConfigRaw
I0910 17:47:53.514974   23329 main.go:141] libmachine: () Calling .GetMachineName
I0910 17:47:53.515169   23329 main.go:141] libmachine: (functional-256199) Calling .GetState
I0910 17:47:53.516974   23329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0910 17:47:53.517013   23329 main.go:141] libmachine: Launching plugin server for driver kvm2
I0910 17:47:53.531672   23329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46027
I0910 17:47:53.532113   23329 main.go:141] libmachine: () Calling .GetVersion
I0910 17:47:53.532570   23329 main.go:141] libmachine: Using API Version  1
I0910 17:47:53.532589   23329 main.go:141] libmachine: () Calling .SetConfigRaw
I0910 17:47:53.532979   23329 main.go:141] libmachine: () Calling .GetMachineName
I0910 17:47:53.533162   23329 main.go:141] libmachine: (functional-256199) Calling .DriverName
I0910 17:47:53.533406   23329 ssh_runner.go:195] Run: systemctl --version
I0910 17:47:53.533447   23329 main.go:141] libmachine: (functional-256199) Calling .GetSSHHostname
I0910 17:47:53.536254   23329 main.go:141] libmachine: (functional-256199) DBG | domain functional-256199 has defined MAC address 52:54:00:48:30:8f in network mk-functional-256199
I0910 17:47:53.536730   23329 main.go:141] libmachine: (functional-256199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:30:8f", ip: ""} in network mk-functional-256199: {Iface:virbr1 ExpiryTime:2024-09-10 18:44:29 +0000 UTC Type:0 Mac:52:54:00:48:30:8f Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:functional-256199 Clientid:01:52:54:00:48:30:8f}
I0910 17:47:53.536760   23329 main.go:141] libmachine: (functional-256199) DBG | domain functional-256199 has defined IP address 192.168.39.39 and MAC address 52:54:00:48:30:8f in network mk-functional-256199
I0910 17:47:53.536916   23329 main.go:141] libmachine: (functional-256199) Calling .GetSSHPort
I0910 17:47:53.537078   23329 main.go:141] libmachine: (functional-256199) Calling .GetSSHKeyPath
I0910 17:47:53.537240   23329 main.go:141] libmachine: (functional-256199) Calling .GetSSHUsername
I0910 17:47:53.537379   23329 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/functional-256199/id_rsa Username:docker}
I0910 17:47:53.625497   23329 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0910 17:47:53.675888   23329 main.go:141] libmachine: Making call to close driver server
I0910 17:47:53.675905   23329 main.go:141] libmachine: (functional-256199) Calling .Close
I0910 17:47:53.676201   23329 main.go:141] libmachine: Successfully made call to close driver server
I0910 17:47:53.676224   23329 main.go:141] libmachine: Making call to close connection to plugin binary
I0910 17:47:53.676235   23329 main.go:141] libmachine: Making call to close driver server
I0910 17:47:53.676244   23329 main.go:141] libmachine: (functional-256199) Calling .Close
I0910 17:47:53.676243   23329 main.go:141] libmachine: (functional-256199) DBG | Closing plugin on server side
I0910 17:47:53.676476   23329 main.go:141] libmachine: Successfully made call to close driver server
I0910 17:47:53.676497   23329 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-256199 ssh pgrep buildkitd: exit status 1 (198.597573ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 image build -t localhost/my-image:functional-256199 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-256199 image build -t localhost/my-image:functional-256199 testdata/build --alsologtostderr: (2.969745285s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-256199 image build -t localhost/my-image:functional-256199 testdata/build --alsologtostderr:
I0910 17:47:53.920506   23383 out.go:345] Setting OutFile to fd 1 ...
I0910 17:47:53.920635   23383 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:47:53.920645   23383 out.go:358] Setting ErrFile to fd 2...
I0910 17:47:53.920650   23383 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0910 17:47:53.920822   23383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5970/.minikube/bin
I0910 17:47:53.921371   23383 config.go:182] Loaded profile config "functional-256199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:47:53.921880   23383 config.go:182] Loaded profile config "functional-256199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0910 17:47:53.922286   23383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0910 17:47:53.922325   23383 main.go:141] libmachine: Launching plugin server for driver kvm2
I0910 17:47:53.937070   23383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45637
I0910 17:47:53.937563   23383 main.go:141] libmachine: () Calling .GetVersion
I0910 17:47:53.938076   23383 main.go:141] libmachine: Using API Version  1
I0910 17:47:53.938099   23383 main.go:141] libmachine: () Calling .SetConfigRaw
I0910 17:47:53.938479   23383 main.go:141] libmachine: () Calling .GetMachineName
I0910 17:47:53.938713   23383 main.go:141] libmachine: (functional-256199) Calling .GetState
I0910 17:47:53.940480   23383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0910 17:47:53.940514   23383 main.go:141] libmachine: Launching plugin server for driver kvm2
I0910 17:47:53.955205   23383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
I0910 17:47:53.955632   23383 main.go:141] libmachine: () Calling .GetVersion
I0910 17:47:53.956197   23383 main.go:141] libmachine: Using API Version  1
I0910 17:47:53.956222   23383 main.go:141] libmachine: () Calling .SetConfigRaw
I0910 17:47:53.956598   23383 main.go:141] libmachine: () Calling .GetMachineName
I0910 17:47:53.956830   23383 main.go:141] libmachine: (functional-256199) Calling .DriverName
I0910 17:47:53.957050   23383 ssh_runner.go:195] Run: systemctl --version
I0910 17:47:53.957091   23383 main.go:141] libmachine: (functional-256199) Calling .GetSSHHostname
I0910 17:47:53.959667   23383 main.go:141] libmachine: (functional-256199) DBG | domain functional-256199 has defined MAC address 52:54:00:48:30:8f in network mk-functional-256199
I0910 17:47:53.960046   23383 main.go:141] libmachine: (functional-256199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:30:8f", ip: ""} in network mk-functional-256199: {Iface:virbr1 ExpiryTime:2024-09-10 18:44:29 +0000 UTC Type:0 Mac:52:54:00:48:30:8f Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:functional-256199 Clientid:01:52:54:00:48:30:8f}
I0910 17:47:53.960079   23383 main.go:141] libmachine: (functional-256199) DBG | domain functional-256199 has defined IP address 192.168.39.39 and MAC address 52:54:00:48:30:8f in network mk-functional-256199
I0910 17:47:53.960196   23383 main.go:141] libmachine: (functional-256199) Calling .GetSSHPort
I0910 17:47:53.960428   23383 main.go:141] libmachine: (functional-256199) Calling .GetSSHKeyPath
I0910 17:47:53.960564   23383 main.go:141] libmachine: (functional-256199) Calling .GetSSHUsername
I0910 17:47:53.960764   23383 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/functional-256199/id_rsa Username:docker}
I0910 17:47:54.048653   23383 build_images.go:161] Building image from path: /tmp/build.356174658.tar
I0910 17:47:54.048720   23383 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0910 17:47:54.058589   23383 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.356174658.tar
I0910 17:47:54.062725   23383 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.356174658.tar: stat -c "%s %y" /var/lib/minikube/build/build.356174658.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.356174658.tar': No such file or directory
I0910 17:47:54.062799   23383 ssh_runner.go:362] scp /tmp/build.356174658.tar --> /var/lib/minikube/build/build.356174658.tar (3072 bytes)
I0910 17:47:54.086226   23383 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.356174658
I0910 17:47:54.097682   23383 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.356174658 -xf /var/lib/minikube/build/build.356174658.tar
I0910 17:47:54.108467   23383 docker.go:360] Building image: /var/lib/minikube/build/build.356174658
I0910 17:47:54.108539   23383 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-256199 /var/lib/minikube/build/build.356174658
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 writing image sha256:d6eb2795d81ca4341ad0e53948954ec9b7f9d2f1c755387f48e7a09071777489 done
#8 naming to localhost/my-image:functional-256199 done
#8 DONE 0.1s
I0910 17:47:56.819121   23383 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-256199 /var/lib/minikube/build/build.356174658: (2.710557439s)
I0910 17:47:56.819200   23383 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.356174658
I0910 17:47:56.830728   23383 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.356174658.tar
I0910 17:47:56.845325   23383 build_images.go:217] Built localhost/my-image:functional-256199 from /tmp/build.356174658.tar
I0910 17:47:56.845365   23383 build_images.go:133] succeeded building to: functional-256199
I0910 17:47:56.845373   23383 build_images.go:134] failed building to: 
I0910 17:47:56.845403   23383 main.go:141] libmachine: Making call to close driver server
I0910 17:47:56.845419   23383 main.go:141] libmachine: (functional-256199) Calling .Close
I0910 17:47:56.845845   23383 main.go:141] libmachine: (functional-256199) DBG | Closing plugin on server side
I0910 17:47:56.845861   23383 main.go:141] libmachine: Successfully made call to close driver server
I0910 17:47:56.845875   23383 main.go:141] libmachine: Making call to close connection to plugin binary
I0910 17:47:56.845891   23383 main.go:141] libmachine: Making call to close driver server
I0910 17:47:56.845900   23383 main.go:141] libmachine: (functional-256199) Calling .Close
I0910 17:47:56.846115   23383 main.go:141] libmachine: Successfully made call to close driver server
I0910 17:47:56.846131   23383 main.go:141] libmachine: (functional-256199) DBG | Closing plugin on server side
I0910 17:47:56.846141   23383 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.43s)
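The build log above pins down the Dockerfile the test ships in testdata/build: a FROM on gcr.io/k8s-minikube/busybox, a RUN true, and an ADD of content.txt (steps #1 through #8). As a hedged sketch of reproducing it by hand (the real content.txt payload is not shown in this log, so the file written below is a placeholder):

	mkdir -p testdata/build
	printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > testdata/build/Dockerfile
	echo 'placeholder' > testdata/build/content.txt
	out/minikube-linux-amd64 -p functional-256199 image build -t localhost/my-image:functional-256199 testdata/build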

TestFunctional/parallel/ImageCommands/Setup (1.59s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.565308848s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-256199
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.59s)

TestFunctional/parallel/DockerEnv/bash (0.81s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-256199 docker-env) && out/minikube-linux-amd64 status -p functional-256199"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-256199 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.81s)
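This test verifies that eval'ing the docker-env output repoints the host docker CLI at the daemon inside the VM. The same check, lifted verbatim from the commands above, works interactively:

	eval $(out/minikube-linux-amd64 -p functional-256199 docker-env)
	out/minikube-linux-amd64 status -p functional-256199
	docker images   # now lists images from the minikube VM's docker daemon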

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 image load --daemon kicbase/echo-server:functional-256199 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.05s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "238.775147ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "54.056627ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "229.775054ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "48.59825ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 image load --daemon kicbase/echo-server:functional-256199 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-256199
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 image load --daemon kicbase/echo-server:functional-256199 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.49s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 image save kicbase/echo-server:functional-256199 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 image rm kicbase/echo-server:functional-256199 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)
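ImageSaveToFile, ImageRemove, and ImageLoadFromFile together exercise a full save/load tar round trip. A by-hand equivalent, using a scratch path instead of the Jenkins workspace path in the log:

	out/minikube-linux-amd64 -p functional-256199 image save kicbase/echo-server:functional-256199 /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-256199 image rm kicbase/echo-server:functional-256199
	out/minikube-linux-amd64 -p functional-256199 image load /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-256199 image ls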

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-256199
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 image save --daemon kicbase/echo-server:functional-256199 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-256199
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

TestFunctional/parallel/ServiceCmd/DeployApp (21.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-256199 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-256199 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-ngtzj" [2143368c-2285-4744-974c-28664c4ebcd9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-ngtzj" [2143368c-2285-4744-974c-28664c4ebcd9] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 21.00399715s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (21.24s)
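The deployment is exposed as a NodePort service on port 8080; the port the cluster actually assigned (32445, per the HTTPS and URL subtests below) can be read back with a standard jsonpath query. An editor's sketch, not part of the test:

	kubectl --context functional-256199 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'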

TestFunctional/parallel/ServiceCmd/List (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.44s)

TestFunctional/parallel/MountCmd/any-port (7.34s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-256199 /tmp/TestFunctionalparallelMountCmdany-port820487510/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725990468365174029" to /tmp/TestFunctionalparallelMountCmdany-port820487510/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725990468365174029" to /tmp/TestFunctionalparallelMountCmdany-port820487510/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725990468365174029" to /tmp/TestFunctionalparallelMountCmdany-port820487510/001/test-1725990468365174029
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-256199 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (228.503455ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 10 17:47 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 10 17:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 10 17:47 test-1725990468365174029
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh cat /mount-9p/test-1725990468365174029
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-256199 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [db92d048-c822-4761-abbf-3626e707852a] Pending
helpers_test.go:344: "busybox-mount" [db92d048-c822-4761-abbf-3626e707852a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [db92d048-c822-4761-abbf-3626e707852a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [db92d048-c822-4761-abbf-3626e707852a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005437795s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-256199 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-256199 /tmp/TestFunctionalparallelMountCmdany-port820487510/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.34s)
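The first findmnt probe failing and the immediate retry succeeding is expected: the 9p mount comes up asynchronously after the mount daemon starts. The verification steps, verbatim from above, can be replayed by hand:

	out/minikube-linux-amd64 -p functional-256199 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-256199 ssh -- ls -la /mount-9p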

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 service list -o json
functional_test.go:1494: Took "508.66946ms" to run "out/minikube-linux-amd64 -p functional-256199 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.39:32445
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

TestFunctional/parallel/ServiceCmd/Format (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

TestFunctional/parallel/ServiceCmd/URL (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.39:32445
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)
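With the endpoint in hand, probing it directly is a one-liner (an editor's sketch; the echoserver:1.8 image behind hello-node should answer any HTTP request by echoing the request details back):

	curl -i http://192.168.39.39:32445/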

TestFunctional/parallel/MountCmd/specific-port (1.55s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-256199 /tmp/TestFunctionalparallelMountCmdspecific-port1514103700/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-256199 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (195.885476ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-256199 /tmp/TestFunctionalparallelMountCmdspecific-port1514103700/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-256199 ssh "sudo umount -f /mount-9p": exit status 1 (216.253608ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-256199 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-256199 /tmp/TestFunctionalparallelMountCmdspecific-port1514103700/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.55s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.52s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-256199 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2752869550/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-256199 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2752869550/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-256199 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2752869550/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-256199 ssh "findmnt -T" /mount1: exit status 1 (286.645843ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-256199 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-256199 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-256199 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2752869550/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-256199 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2752869550/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-256199 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2752869550/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
2024/09/10 17:48:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.52s)
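Cleanup of all three mounts goes through the mount command's kill flag, visible above; the same invocation, verbatim from the log, tears down any lingering mount helper processes for a profile:

	out/minikube-linux-amd64 mount -p functional-256199 --kill=true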

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-256199
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-256199
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-256199
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (193.3s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-605402 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-605402 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m16.405286195s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-605402 cache add gcr.io/k8s-minikube/gvisor-addon:2
E0910 18:32:37.634358   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/skaffold-089494/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-605402 cache add gcr.io/k8s-minikube/gvisor-addon:2: (22.571973423s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-605402 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-605402 addons enable gvisor: (3.642214477s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [9d40db29-79bb-4f7c-9582-d7d58f766ba0] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.00445133s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-605402 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [18935c37-2004-4a13-93cb-0acd6a811c67] Pending
helpers_test.go:344: "nginx-gvisor" [18935c37-2004-4a13-93cb-0acd6a811c67] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0910 18:33:06.276080   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "nginx-gvisor" [18935c37-2004-4a13-93cb-0acd6a811c67] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 36.004400185s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-605402
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-605402: (2.336062268s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-605402 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-605402 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (33.985070901s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [9d40db29-79bb-4f7c-9582-d7d58f766ba0] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:344: "gvisor" [9d40db29-79bb-4f7c-9582-d7d58f766ba0] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.005392108s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [18935c37-2004-4a13-93cb-0acd6a811c67] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.004641436s
helpers_test.go:175: Cleaning up "gvisor-605402" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-605402
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-605402: (1.173467859s)
--- PASS: TestGvisorAddon (193.30s)
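A quick manual spot check mirrors the label selectors the test waits on (the kubectl invocations are an editor's sketch; the selectors are taken from the log above):

	kubectl --context gvisor-605402 get pods -n kube-system -l kubernetes.io/minikube-addons=gvisor
	kubectl --context gvisor-605402 get pods -l run=nginx,runtime=gvisor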

TestMultiControlPlane/serial/StartCluster (212.22s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-670212 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0910 17:48:11.406634   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:48:16.528091   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:48:26.770060   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:48:47.251632   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:49:28.213166   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:50:50.134778   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-670212 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m31.595347663s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (212.22s)

TestMultiControlPlane/serial/DeployApp (5.31s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670212 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670212 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-670212 -- rollout status deployment/busybox: (3.159613548s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670212 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670212 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670212 -- exec busybox-7dff88458-cnr68 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670212 -- exec busybox-7dff88458-nnmgx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670212 -- exec busybox-7dff88458-xldjd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670212 -- exec busybox-7dff88458-cnr68 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670212 -- exec busybox-7dff88458-nnmgx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670212 -- exec busybox-7dff88458-xldjd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670212 -- exec busybox-7dff88458-cnr68 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670212 -- exec busybox-7dff88458-nnmgx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670212 -- exec busybox-7dff88458-xldjd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.31s)

TestMultiControlPlane/serial/PingHostFromPods (1.21s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670212 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670212 -- exec busybox-7dff88458-cnr68 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670212 -- exec busybox-7dff88458-cnr68 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670212 -- exec busybox-7dff88458-nnmgx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670212 -- exec busybox-7dff88458-nnmgx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670212 -- exec busybox-7dff88458-xldjd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-670212 -- exec busybox-7dff88458-xldjd -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.21s)

TestMultiControlPlane/serial/AddWorkerNode (62.31s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-670212 -v=7 --alsologtostderr
E0910 17:52:20.470823   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:20.477290   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:20.488725   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:20.510160   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:20.551569   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:20.633086   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:20.794623   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:21.116094   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:21.758413   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:23.040004   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:25.602005   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:30.724364   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:52:40.965788   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-670212 -v=7 --alsologtostderr: (1m1.515194847s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (62.31s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-670212 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

TestMultiControlPlane/serial/CopyFile (12.24s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 cp testdata/cp-test.txt ha-670212:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 cp ha-670212:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3495481775/001/cp-test_ha-670212.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 cp ha-670212:/home/docker/cp-test.txt ha-670212-m02:/home/docker/cp-test_ha-670212_ha-670212-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m02 "sudo cat /home/docker/cp-test_ha-670212_ha-670212-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 cp ha-670212:/home/docker/cp-test.txt ha-670212-m03:/home/docker/cp-test_ha-670212_ha-670212-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m03 "sudo cat /home/docker/cp-test_ha-670212_ha-670212-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 cp ha-670212:/home/docker/cp-test.txt ha-670212-m04:/home/docker/cp-test_ha-670212_ha-670212-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m04 "sudo cat /home/docker/cp-test_ha-670212_ha-670212-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 cp testdata/cp-test.txt ha-670212-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 cp ha-670212-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3495481775/001/cp-test_ha-670212-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 cp ha-670212-m02:/home/docker/cp-test.txt ha-670212:/home/docker/cp-test_ha-670212-m02_ha-670212.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212 "sudo cat /home/docker/cp-test_ha-670212-m02_ha-670212.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 cp ha-670212-m02:/home/docker/cp-test.txt ha-670212-m03:/home/docker/cp-test_ha-670212-m02_ha-670212-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m03 "sudo cat /home/docker/cp-test_ha-670212-m02_ha-670212-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 cp ha-670212-m02:/home/docker/cp-test.txt ha-670212-m04:/home/docker/cp-test_ha-670212-m02_ha-670212-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m04 "sudo cat /home/docker/cp-test_ha-670212-m02_ha-670212-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 cp testdata/cp-test.txt ha-670212-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 cp ha-670212-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3495481775/001/cp-test_ha-670212-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 cp ha-670212-m03:/home/docker/cp-test.txt ha-670212:/home/docker/cp-test_ha-670212-m03_ha-670212.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212 "sudo cat /home/docker/cp-test_ha-670212-m03_ha-670212.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 cp ha-670212-m03:/home/docker/cp-test.txt ha-670212-m02:/home/docker/cp-test_ha-670212-m03_ha-670212-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m02 "sudo cat /home/docker/cp-test_ha-670212-m03_ha-670212-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 cp ha-670212-m03:/home/docker/cp-test.txt ha-670212-m04:/home/docker/cp-test_ha-670212-m03_ha-670212-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m04 "sudo cat /home/docker/cp-test_ha-670212-m03_ha-670212-m04.txt"
E0910 17:53:01.447872   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 cp testdata/cp-test.txt ha-670212-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 cp ha-670212-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3495481775/001/cp-test_ha-670212-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 cp ha-670212-m04:/home/docker/cp-test.txt ha-670212:/home/docker/cp-test_ha-670212-m04_ha-670212.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212 "sudo cat /home/docker/cp-test_ha-670212-m04_ha-670212.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 cp ha-670212-m04:/home/docker/cp-test.txt ha-670212-m02:/home/docker/cp-test_ha-670212-m04_ha-670212-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m02 "sudo cat /home/docker/cp-test_ha-670212-m04_ha-670212-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 cp ha-670212-m04:/home/docker/cp-test.txt ha-670212-m03:/home/docker/cp-test_ha-670212-m04_ha-670212-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 ssh -n ha-670212-m03 "sudo cat /home/docker/cp-test_ha-670212-m04_ha-670212-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.24s)

TestMultiControlPlane/serial/StopSecondaryNode (13.19s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 node stop m02 -v=7 --alsologtostderr
E0910 17:53:06.276161   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-670212 node stop m02 -v=7 --alsologtostderr: (12.585381042s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-670212 status -v=7 --alsologtostderr: exit status 7 (606.857121ms)
-- stdout --
	ha-670212
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670212-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-670212-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-670212-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0910 17:53:17.073775   28343 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:53:17.073910   28343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:53:17.073921   28343 out.go:358] Setting ErrFile to fd 2...
	I0910 17:53:17.073927   28343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:53:17.074218   28343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5970/.minikube/bin
	I0910 17:53:17.074401   28343 out.go:352] Setting JSON to false
	I0910 17:53:17.074428   28343 mustload.go:65] Loading cluster: ha-670212
	I0910 17:53:17.074548   28343 notify.go:220] Checking for updates...
	I0910 17:53:17.075184   28343 config.go:182] Loaded profile config "ha-670212": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:53:17.075255   28343 status.go:255] checking status of ha-670212 ...
	I0910 17:53:17.076109   28343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:53:17.076148   28343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:53:17.090665   28343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42387
	I0910 17:53:17.091064   28343 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:53:17.091510   28343 main.go:141] libmachine: Using API Version  1
	I0910 17:53:17.091528   28343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:53:17.091918   28343 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:53:17.092091   28343 main.go:141] libmachine: (ha-670212) Calling .GetState
	I0910 17:53:17.093578   28343 status.go:330] ha-670212 host status = "Running" (err=<nil>)
	I0910 17:53:17.093603   28343 host.go:66] Checking if "ha-670212" exists ...
	I0910 17:53:17.093873   28343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:53:17.093912   28343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:53:17.108643   28343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36433
	I0910 17:53:17.108976   28343 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:53:17.109392   28343 main.go:141] libmachine: Using API Version  1
	I0910 17:53:17.109411   28343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:53:17.109712   28343 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:53:17.109873   28343 main.go:141] libmachine: (ha-670212) Calling .GetIP
	I0910 17:53:17.112511   28343 main.go:141] libmachine: (ha-670212) DBG | domain ha-670212 has defined MAC address 52:54:00:1a:f4:cd in network mk-ha-670212
	I0910 17:53:17.112958   28343 main.go:141] libmachine: (ha-670212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f4:cd", ip: ""} in network mk-ha-670212: {Iface:virbr1 ExpiryTime:2024-09-10 18:48:25 +0000 UTC Type:0 Mac:52:54:00:1a:f4:cd Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-670212 Clientid:01:52:54:00:1a:f4:cd}
	I0910 17:53:17.112995   28343 main.go:141] libmachine: (ha-670212) DBG | domain ha-670212 has defined IP address 192.168.39.87 and MAC address 52:54:00:1a:f4:cd in network mk-ha-670212
	I0910 17:53:17.113086   28343 host.go:66] Checking if "ha-670212" exists ...
	I0910 17:53:17.113357   28343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:53:17.113389   28343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:53:17.127632   28343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41937
	I0910 17:53:17.128002   28343 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:53:17.128458   28343 main.go:141] libmachine: Using API Version  1
	I0910 17:53:17.128477   28343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:53:17.128767   28343 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:53:17.128964   28343 main.go:141] libmachine: (ha-670212) Calling .DriverName
	I0910 17:53:17.129139   28343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:53:17.129172   28343 main.go:141] libmachine: (ha-670212) Calling .GetSSHHostname
	I0910 17:53:17.132480   28343 main.go:141] libmachine: (ha-670212) DBG | domain ha-670212 has defined MAC address 52:54:00:1a:f4:cd in network mk-ha-670212
	I0910 17:53:17.133058   28343 main.go:141] libmachine: (ha-670212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:f4:cd", ip: ""} in network mk-ha-670212: {Iface:virbr1 ExpiryTime:2024-09-10 18:48:25 +0000 UTC Type:0 Mac:52:54:00:1a:f4:cd Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-670212 Clientid:01:52:54:00:1a:f4:cd}
	I0910 17:53:17.133090   28343 main.go:141] libmachine: (ha-670212) DBG | domain ha-670212 has defined IP address 192.168.39.87 and MAC address 52:54:00:1a:f4:cd in network mk-ha-670212
	I0910 17:53:17.133300   28343 main.go:141] libmachine: (ha-670212) Calling .GetSSHPort
	I0910 17:53:17.133493   28343 main.go:141] libmachine: (ha-670212) Calling .GetSSHKeyPath
	I0910 17:53:17.133665   28343 main.go:141] libmachine: (ha-670212) Calling .GetSSHUsername
	I0910 17:53:17.133837   28343 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/ha-670212/id_rsa Username:docker}
	I0910 17:53:17.224099   28343 ssh_runner.go:195] Run: systemctl --version
	I0910 17:53:17.236693   28343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:53:17.251311   28343 kubeconfig.go:125] found "ha-670212" server: "https://192.168.39.254:8443"
	I0910 17:53:17.251346   28343 api_server.go:166] Checking apiserver status ...
	I0910 17:53:17.251387   28343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:53:17.266524   28343 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup
	W0910 17:53:17.275684   28343 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 17:53:17.275735   28343 ssh_runner.go:195] Run: ls
	I0910 17:53:17.279679   28343 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0910 17:53:17.285334   28343 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0910 17:53:17.285353   28343 status.go:422] ha-670212 apiserver status = Running (err=<nil>)
	I0910 17:53:17.285361   28343 status.go:257] ha-670212 status: &{Name:ha-670212 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:53:17.285376   28343 status.go:255] checking status of ha-670212-m02 ...
	I0910 17:53:17.285670   28343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:53:17.285709   28343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:53:17.301911   28343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37877
	I0910 17:53:17.302384   28343 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:53:17.302898   28343 main.go:141] libmachine: Using API Version  1
	I0910 17:53:17.302917   28343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:53:17.303242   28343 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:53:17.303426   28343 main.go:141] libmachine: (ha-670212-m02) Calling .GetState
	I0910 17:53:17.304981   28343 status.go:330] ha-670212-m02 host status = "Stopped" (err=<nil>)
	I0910 17:53:17.304993   28343 status.go:343] host is not running, skipping remaining checks
	I0910 17:53:17.304998   28343 status.go:257] ha-670212-m02 status: &{Name:ha-670212-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:53:17.305020   28343 status.go:255] checking status of ha-670212-m03 ...
	I0910 17:53:17.305418   28343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:53:17.305452   28343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:53:17.319864   28343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44959
	I0910 17:53:17.320311   28343 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:53:17.320740   28343 main.go:141] libmachine: Using API Version  1
	I0910 17:53:17.320757   28343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:53:17.321024   28343 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:53:17.321170   28343 main.go:141] libmachine: (ha-670212-m03) Calling .GetState
	I0910 17:53:17.322581   28343 status.go:330] ha-670212-m03 host status = "Running" (err=<nil>)
	I0910 17:53:17.322596   28343 host.go:66] Checking if "ha-670212-m03" exists ...
	I0910 17:53:17.322968   28343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:53:17.323015   28343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:53:17.338258   28343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42679
	I0910 17:53:17.338690   28343 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:53:17.339099   28343 main.go:141] libmachine: Using API Version  1
	I0910 17:53:17.339117   28343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:53:17.339450   28343 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:53:17.339637   28343 main.go:141] libmachine: (ha-670212-m03) Calling .GetIP
	I0910 17:53:17.342419   28343 main.go:141] libmachine: (ha-670212-m03) DBG | domain ha-670212-m03 has defined MAC address 52:54:00:db:b3:d3 in network mk-ha-670212
	I0910 17:53:17.342866   28343 main.go:141] libmachine: (ha-670212-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:b3:d3", ip: ""} in network mk-ha-670212: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:34 +0000 UTC Type:0 Mac:52:54:00:db:b3:d3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-670212-m03 Clientid:01:52:54:00:db:b3:d3}
	I0910 17:53:17.342905   28343 main.go:141] libmachine: (ha-670212-m03) DBG | domain ha-670212-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:db:b3:d3 in network mk-ha-670212
	I0910 17:53:17.343020   28343 host.go:66] Checking if "ha-670212-m03" exists ...
	I0910 17:53:17.343391   28343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:53:17.343441   28343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:53:17.357545   28343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0910 17:53:17.357847   28343 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:53:17.358221   28343 main.go:141] libmachine: Using API Version  1
	I0910 17:53:17.358241   28343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:53:17.358529   28343 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:53:17.358729   28343 main.go:141] libmachine: (ha-670212-m03) Calling .DriverName
	I0910 17:53:17.358887   28343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:53:17.358907   28343 main.go:141] libmachine: (ha-670212-m03) Calling .GetSSHHostname
	I0910 17:53:17.361453   28343 main.go:141] libmachine: (ha-670212-m03) DBG | domain ha-670212-m03 has defined MAC address 52:54:00:db:b3:d3 in network mk-ha-670212
	I0910 17:53:17.361865   28343 main.go:141] libmachine: (ha-670212-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:b3:d3", ip: ""} in network mk-ha-670212: {Iface:virbr1 ExpiryTime:2024-09-10 18:50:34 +0000 UTC Type:0 Mac:52:54:00:db:b3:d3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-670212-m03 Clientid:01:52:54:00:db:b3:d3}
	I0910 17:53:17.361891   28343 main.go:141] libmachine: (ha-670212-m03) DBG | domain ha-670212-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:db:b3:d3 in network mk-ha-670212
	I0910 17:53:17.362029   28343 main.go:141] libmachine: (ha-670212-m03) Calling .GetSSHPort
	I0910 17:53:17.362186   28343 main.go:141] libmachine: (ha-670212-m03) Calling .GetSSHKeyPath
	I0910 17:53:17.362328   28343 main.go:141] libmachine: (ha-670212-m03) Calling .GetSSHUsername
	I0910 17:53:17.362511   28343 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/ha-670212-m03/id_rsa Username:docker}
	I0910 17:53:17.441596   28343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:53:17.456405   28343 kubeconfig.go:125] found "ha-670212" server: "https://192.168.39.254:8443"
	I0910 17:53:17.456432   28343 api_server.go:166] Checking apiserver status ...
	I0910 17:53:17.456468   28343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:53:17.469989   28343 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1846/cgroup
	W0910 17:53:17.480627   28343 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1846/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 17:53:17.480701   28343 ssh_runner.go:195] Run: ls
	I0910 17:53:17.484813   28343 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0910 17:53:17.489002   28343 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0910 17:53:17.489021   28343 status.go:422] ha-670212-m03 apiserver status = Running (err=<nil>)
	I0910 17:53:17.489030   28343 status.go:257] ha-670212-m03 status: &{Name:ha-670212-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:53:17.489043   28343 status.go:255] checking status of ha-670212-m04 ...
	I0910 17:53:17.489319   28343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:53:17.489351   28343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:53:17.504380   28343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36853
	I0910 17:53:17.504766   28343 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:53:17.505215   28343 main.go:141] libmachine: Using API Version  1
	I0910 17:53:17.505233   28343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:53:17.505582   28343 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:53:17.505783   28343 main.go:141] libmachine: (ha-670212-m04) Calling .GetState
	I0910 17:53:17.507363   28343 status.go:330] ha-670212-m04 host status = "Running" (err=<nil>)
	I0910 17:53:17.507376   28343 host.go:66] Checking if "ha-670212-m04" exists ...
	I0910 17:53:17.507674   28343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:53:17.507711   28343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:53:17.521979   28343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37169
	I0910 17:53:17.522383   28343 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:53:17.522814   28343 main.go:141] libmachine: Using API Version  1
	I0910 17:53:17.522833   28343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:53:17.523141   28343 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:53:17.523313   28343 main.go:141] libmachine: (ha-670212-m04) Calling .GetIP
	I0910 17:53:17.526132   28343 main.go:141] libmachine: (ha-670212-m04) DBG | domain ha-670212-m04 has defined MAC address 52:54:00:12:59:71 in network mk-ha-670212
	I0910 17:53:17.526569   28343 main.go:141] libmachine: (ha-670212-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:59:71", ip: ""} in network mk-ha-670212: {Iface:virbr1 ExpiryTime:2024-09-10 18:52:04 +0000 UTC Type:0 Mac:52:54:00:12:59:71 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-670212-m04 Clientid:01:52:54:00:12:59:71}
	I0910 17:53:17.526603   28343 main.go:141] libmachine: (ha-670212-m04) DBG | domain ha-670212-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:12:59:71 in network mk-ha-670212
	I0910 17:53:17.526767   28343 host.go:66] Checking if "ha-670212-m04" exists ...
	I0910 17:53:17.527044   28343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:53:17.527086   28343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:53:17.541698   28343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35917
	I0910 17:53:17.542130   28343 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:53:17.542597   28343 main.go:141] libmachine: Using API Version  1
	I0910 17:53:17.542627   28343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:53:17.542938   28343 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:53:17.543107   28343 main.go:141] libmachine: (ha-670212-m04) Calling .DriverName
	I0910 17:53:17.543254   28343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 17:53:17.543276   28343 main.go:141] libmachine: (ha-670212-m04) Calling .GetSSHHostname
	I0910 17:53:17.545854   28343 main.go:141] libmachine: (ha-670212-m04) DBG | domain ha-670212-m04 has defined MAC address 52:54:00:12:59:71 in network mk-ha-670212
	I0910 17:53:17.546241   28343 main.go:141] libmachine: (ha-670212-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:59:71", ip: ""} in network mk-ha-670212: {Iface:virbr1 ExpiryTime:2024-09-10 18:52:04 +0000 UTC Type:0 Mac:52:54:00:12:59:71 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-670212-m04 Clientid:01:52:54:00:12:59:71}
	I0910 17:53:17.546271   28343 main.go:141] libmachine: (ha-670212-m04) DBG | domain ha-670212-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:12:59:71 in network mk-ha-670212
	I0910 17:53:17.546411   28343 main.go:141] libmachine: (ha-670212-m04) Calling .GetSSHPort
	I0910 17:53:17.546599   28343 main.go:141] libmachine: (ha-670212-m04) Calling .GetSSHKeyPath
	I0910 17:53:17.546759   28343 main.go:141] libmachine: (ha-670212-m04) Calling .GetSSHUsername
	I0910 17:53:17.546986   28343 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/ha-670212-m04/id_rsa Username:docker}
	I0910 17:53:17.625597   28343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:53:17.639597   28343 status.go:257] ha-670212-m04 status: &{Name:ha-670212-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.19s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.38s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.38s)

TestMultiControlPlane/serial/RestartSecondaryNode (42.82s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 node start m02 -v=7 --alsologtostderr
E0910 17:53:33.977158   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:53:42.409291   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-670212 node start m02 -v=7 --alsologtostderr: (41.973506204s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (42.82s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.54s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.54s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (255.8s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-670212 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-670212 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-670212 -v=7 --alsologtostderr: (40.717954561s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-670212 --wait=true -v=7 --alsologtostderr
E0910 17:55:04.331460   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:57:20.470780   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:57:48.173341   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
E0910 17:58:06.275179   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-670212 --wait=true -v=7 --alsologtostderr: (3m34.995910619s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-670212
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (255.80s)

TestMultiControlPlane/serial/DeleteSecondaryNode (7.08s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-670212 node delete m03 -v=7 --alsologtostderr: (6.366427737s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.08s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

TestMultiControlPlane/serial/StopCluster (38.27s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-670212 stop -v=7 --alsologtostderr: (38.172741798s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-670212 status -v=7 --alsologtostderr: exit status 7 (99.671272ms)
-- stdout --
	ha-670212
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-670212-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-670212-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0910 17:59:02.835099   30742 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:59:02.835333   30742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:59:02.835341   30742 out.go:358] Setting ErrFile to fd 2...
	I0910 17:59:02.835345   30742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:59:02.835550   30742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5970/.minikube/bin
	I0910 17:59:02.835701   30742 out.go:352] Setting JSON to false
	I0910 17:59:02.835729   30742 mustload.go:65] Loading cluster: ha-670212
	I0910 17:59:02.835761   30742 notify.go:220] Checking for updates...
	I0910 17:59:02.836084   30742 config.go:182] Loaded profile config "ha-670212": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:59:02.836096   30742 status.go:255] checking status of ha-670212 ...
	I0910 17:59:02.836476   30742 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:59:02.836534   30742 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:59:02.855298   30742 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38941
	I0910 17:59:02.855748   30742 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:59:02.856238   30742 main.go:141] libmachine: Using API Version  1
	I0910 17:59:02.856271   30742 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:59:02.856618   30742 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:59:02.856825   30742 main.go:141] libmachine: (ha-670212) Calling .GetState
	I0910 17:59:02.858665   30742 status.go:330] ha-670212 host status = "Stopped" (err=<nil>)
	I0910 17:59:02.858680   30742 status.go:343] host is not running, skipping remaining checks
	I0910 17:59:02.858687   30742 status.go:257] ha-670212 status: &{Name:ha-670212 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:59:02.858710   30742 status.go:255] checking status of ha-670212-m02 ...
	I0910 17:59:02.858997   30742 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:59:02.859029   30742 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:59:02.873287   30742 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42187
	I0910 17:59:02.873762   30742 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:59:02.874232   30742 main.go:141] libmachine: Using API Version  1
	I0910 17:59:02.874257   30742 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:59:02.874649   30742 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:59:02.874825   30742 main.go:141] libmachine: (ha-670212-m02) Calling .GetState
	I0910 17:59:02.876432   30742 status.go:330] ha-670212-m02 host status = "Stopped" (err=<nil>)
	I0910 17:59:02.876444   30742 status.go:343] host is not running, skipping remaining checks
	I0910 17:59:02.876449   30742 status.go:257] ha-670212-m02 status: &{Name:ha-670212-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 17:59:02.876463   30742 status.go:255] checking status of ha-670212-m04 ...
	I0910 17:59:02.876722   30742 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 17:59:02.876756   30742 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 17:59:02.891478   30742 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I0910 17:59:02.891807   30742 main.go:141] libmachine: () Calling .GetVersion
	I0910 17:59:02.892231   30742 main.go:141] libmachine: Using API Version  1
	I0910 17:59:02.892252   30742 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 17:59:02.892541   30742 main.go:141] libmachine: () Calling .GetMachineName
	I0910 17:59:02.892717   30742 main.go:141] libmachine: (ha-670212-m04) Calling .GetState
	I0910 17:59:02.894064   30742 status.go:330] ha-670212-m04 host status = "Stopped" (err=<nil>)
	I0910 17:59:02.894078   30742 status.go:343] host is not running, skipping remaining checks
	I0910 17:59:02.894085   30742 status.go:257] ha-670212-m04 status: &{Name:ha-670212-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (38.27s)

TestMultiControlPlane/serial/RestartCluster (167.23s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-670212 --wait=true -v=7 --alsologtostderr --driver=kvm2 
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-670212 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m46.521779944s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (167.23s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.35s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.35s)

TestMultiControlPlane/serial/AddSecondaryNode (83.49s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-670212 --control-plane -v=7 --alsologtostderr
E0910 18:02:20.470728   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:03:06.275815   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-670212 --control-plane -v=7 --alsologtostderr: (1m22.699144296s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-670212 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (83.49s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

TestImageBuild/serial/Setup (45.47s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-396715 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-396715 --driver=kvm2 : (45.474335076s)
--- PASS: TestImageBuild/serial/Setup (45.47s)

TestImageBuild/serial/NormalBuild (2.13s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-396715
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-396715: (2.126941128s)
--- PASS: TestImageBuild/serial/NormalBuild (2.13s)

TestImageBuild/serial/BuildWithBuildArg (1.14s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-396715
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-396715: (1.139305771s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.14s)

TestImageBuild/serial/BuildWithDockerIgnore (1.11s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-396715
image_test.go:133: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-396715: (1.111595572s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.11s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.85s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-396715
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.85s)

TestJSONOutput/start/Command (60.4s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-351390 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0910 18:04:29.338782   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-351390 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m0.399981783s)
--- PASS: TestJSONOutput/start/Command (60.40s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.56s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-351390 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.56s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.5s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-351390 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.50s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.46s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-351390 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-351390 --output=json --user=testUser: (7.462279943s)
--- PASS: TestJSONOutput/stop/Command (7.46s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-129348 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-129348 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.129785ms)

-- stdout --
	{"specversion":"1.0","id":"e5904b13-6235-4436-95a5-6e0878e6a274","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-129348] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"59b1405b-36d6-4e42-8545-7921da5e59f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19598"}}
	{"specversion":"1.0","id":"22a2cd29-5ace-471e-b47c-265822ced056","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6b71c850-c771-477a-bd06-cd08371c747d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19598-5970/kubeconfig"}}
	{"specversion":"1.0","id":"d7e0fd6a-d556-44cc-b2c9-30320cee3158","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5970/.minikube"}}
	{"specversion":"1.0","id":"04030d88-cca9-44a6-af71-de47a83a3808","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9a1978a8-21b7-4444-b424-387beab6b335","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"10139292-9cbe-4a21-8c2d-5b2b11e2a03d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-129348" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-129348
--- PASS: TestErrorJSONOutput (0.20s)
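For reference, each stdout line above is a CloudEvents 1.0 envelope. A minimal Go sketch for consuming such a stream; the struct below mirrors only the fields visible in this log, not minikube's own types:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the fields visible in the stdout above; every event
// in this log carries a flat string-to-string "data" payload.
type cloudEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// e.g. out/minikube-linux-amd64 start --output=json ... | this program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		// error events carry name/exitcode/message, as in the
		// DRV_UNSUPPORTED_OS event above
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}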

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (100.07s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-168836 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-168836 --driver=kvm2 : (48.249193139s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-172351 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-172351 --driver=kvm2 : (49.246592239s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-168836
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-172351
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-172351" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-172351
helpers_test.go:175: Cleaning up "first-168836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-168836
--- PASS: TestMinikubeProfile (100.07s)
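The profile assertions above rely on `profile list -ojson`. A rough sketch of running the same check outside the test harness; the top-level JSON schema is not shown in this log, so the decode stays deliberately generic:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// same invocation as minikube_profile_test.go:55 above
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	var profiles map[string]json.RawMessage // schema unknown here; keep it generic
	if err := json.Unmarshal(out, &profiles); err != nil {
		panic(err)
	}
	for key := range profiles {
		fmt.Println("top-level key:", key)
	}
}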

TestMountStart/serial/StartWithMountFirst (27.74s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-004847 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0910 18:07:20.470785   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-004847 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (26.735772477s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.74s)

TestMountStart/serial/VerifyMountFirst (0.35s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-004847 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-004847 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)
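Those two ssh commands are the entire verification: list the mounted host directory, then confirm a 9p filesystem shows up in the guest's mount table. A sketch of the same check driven from Go (profile name copied from the log; "minikube" stands in for the test's out/minikube-linux-amd64 binary):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "mount-start-1-004847" // from the log above
	// mirrors: minikube -p <profile> ssh -- mount | grep 9p
	out, err := exec.Command("minikube", "-p", profile, "ssh", "--", "mount").Output()
	if err != nil {
		panic(err)
	}
	if !strings.Contains(string(out), "9p") {
		panic("no 9p mount visible in the guest")
	}
	fmt.Println("9p host mount present")
}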

TestMountStart/serial/StartWithMountSecond (30.86s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-022313 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-022313 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (29.863545852s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.86s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-022313 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-022313 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.91s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-004847 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.91s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-022313 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-022313 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (2.36s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-022313
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-022313: (2.357610112s)
--- PASS: TestMountStart/serial/Stop (2.36s)

TestMountStart/serial/RestartStopped (27.41s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-022313
E0910 18:08:06.275593   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-022313: (26.410183034s)
--- PASS: TestMountStart/serial/RestartStopped (27.41s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-022313 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-022313 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (126.79s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-044930 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0910 18:08:43.535316   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-044930 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m6.381666504s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (126.79s)

TestMultiNode/serial/DeployApp2Nodes (4.45s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-044930 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-044930 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-044930 -- rollout status deployment/busybox: (2.871302673s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-044930 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-044930 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-044930 -- exec busybox-7dff88458-b9p5t -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-044930 -- exec busybox-7dff88458-ltk6q -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-044930 -- exec busybox-7dff88458-b9p5t -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-044930 -- exec busybox-7dff88458-ltk6q -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-044930 -- exec busybox-7dff88458-b9p5t -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-044930 -- exec busybox-7dff88458-ltk6q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.45s)
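The deployment check above execs nslookup in both busybox pods against three names of increasing specificity: an external name, the in-cluster short name, and the in-cluster FQDN. The same loop, compressed into a sketch (pod names copied from the log; the kubectl context is assumed to already point at the cluster):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7dff88458-b9p5t", "busybox-7dff88458-ltk6q"} // from the log
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				panic(fmt.Sprintf("%s could not resolve %s: %v\n%s", pod, name, err, out))
			}
		}
	}
	fmt.Println("DNS verified from both pods")
}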

TestMultiNode/serial/PingHostFrom2Pods (0.83s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-044930 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-044930 -- exec busybox-7dff88458-b9p5t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-044930 -- exec busybox-7dff88458-b9p5t -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-044930 -- exec busybox-7dff88458-ltk6q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-044930 -- exec busybox-7dff88458-ltk6q -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)

TestMultiNode/serial/AddNode (59.45s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-044930 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-044930 -v 3 --alsologtostderr: (58.898318614s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.45s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-044930 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.2s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

TestMultiNode/serial/CopyFile (7.05s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 cp testdata/cp-test.txt multinode-044930:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 ssh -n multinode-044930 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 cp multinode-044930:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2060757709/001/cp-test_multinode-044930.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 ssh -n multinode-044930 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 cp multinode-044930:/home/docker/cp-test.txt multinode-044930-m02:/home/docker/cp-test_multinode-044930_multinode-044930-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 ssh -n multinode-044930 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 ssh -n multinode-044930-m02 "sudo cat /home/docker/cp-test_multinode-044930_multinode-044930-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 cp multinode-044930:/home/docker/cp-test.txt multinode-044930-m03:/home/docker/cp-test_multinode-044930_multinode-044930-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 ssh -n multinode-044930 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 ssh -n multinode-044930-m03 "sudo cat /home/docker/cp-test_multinode-044930_multinode-044930-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 cp testdata/cp-test.txt multinode-044930-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 ssh -n multinode-044930-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 cp multinode-044930-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2060757709/001/cp-test_multinode-044930-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 ssh -n multinode-044930-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 cp multinode-044930-m02:/home/docker/cp-test.txt multinode-044930:/home/docker/cp-test_multinode-044930-m02_multinode-044930.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 ssh -n multinode-044930-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 ssh -n multinode-044930 "sudo cat /home/docker/cp-test_multinode-044930-m02_multinode-044930.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 cp multinode-044930-m02:/home/docker/cp-test.txt multinode-044930-m03:/home/docker/cp-test_multinode-044930-m02_multinode-044930-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 ssh -n multinode-044930-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 ssh -n multinode-044930-m03 "sudo cat /home/docker/cp-test_multinode-044930-m02_multinode-044930-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 cp testdata/cp-test.txt multinode-044930-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 ssh -n multinode-044930-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 cp multinode-044930-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2060757709/001/cp-test_multinode-044930-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 ssh -n multinode-044930-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 cp multinode-044930-m03:/home/docker/cp-test.txt multinode-044930:/home/docker/cp-test_multinode-044930-m03_multinode-044930.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 ssh -n multinode-044930-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 ssh -n multinode-044930 "sudo cat /home/docker/cp-test_multinode-044930-m03_multinode-044930.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 cp multinode-044930-m03:/home/docker/cp-test.txt multinode-044930-m02:/home/docker/cp-test_multinode-044930-m03_multinode-044930-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 ssh -n multinode-044930-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 ssh -n multinode-044930-m02 "sudo cat /home/docker/cp-test_multinode-044930-m03_multinode-044930-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.05s)

TestMultiNode/serial/StopNode (3.31s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-044930 node stop m03: (2.485410815s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-044930 status: exit status 7 (411.536487ms)

-- stdout --
	multinode-044930
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-044930-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-044930-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-044930 status --alsologtostderr: exit status 7 (416.669254ms)

-- stdout --
	multinode-044930
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-044930-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-044930-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0910 18:11:52.476460   39052 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:11:52.476717   39052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:11:52.476728   39052 out.go:358] Setting ErrFile to fd 2...
	I0910 18:11:52.476733   39052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:11:52.476923   39052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5970/.minikube/bin
	I0910 18:11:52.477129   39052 out.go:352] Setting JSON to false
	I0910 18:11:52.477161   39052 mustload.go:65] Loading cluster: multinode-044930
	I0910 18:11:52.477221   39052 notify.go:220] Checking for updates...
	I0910 18:11:52.477798   39052 config.go:182] Loaded profile config "multinode-044930": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:11:52.477827   39052 status.go:255] checking status of multinode-044930 ...
	I0910 18:11:52.478265   39052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 18:11:52.478339   39052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:11:52.495738   39052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38297
	I0910 18:11:52.496214   39052 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:11:52.496808   39052 main.go:141] libmachine: Using API Version  1
	I0910 18:11:52.496841   39052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:11:52.497252   39052 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:11:52.497523   39052 main.go:141] libmachine: (multinode-044930) Calling .GetState
	I0910 18:11:52.499256   39052 status.go:330] multinode-044930 host status = "Running" (err=<nil>)
	I0910 18:11:52.499273   39052 host.go:66] Checking if "multinode-044930" exists ...
	I0910 18:11:52.499589   39052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 18:11:52.499634   39052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:11:52.514921   39052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0910 18:11:52.515341   39052 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:11:52.515782   39052 main.go:141] libmachine: Using API Version  1
	I0910 18:11:52.515803   39052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:11:52.516076   39052 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:11:52.516265   39052 main.go:141] libmachine: (multinode-044930) Calling .GetIP
	I0910 18:11:52.518890   39052 main.go:141] libmachine: (multinode-044930) DBG | domain multinode-044930 has defined MAC address 52:54:00:93:fd:7b in network mk-multinode-044930
	I0910 18:11:52.519309   39052 main.go:141] libmachine: (multinode-044930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:fd:7b", ip: ""} in network mk-multinode-044930: {Iface:virbr1 ExpiryTime:2024-09-10 19:08:44 +0000 UTC Type:0 Mac:52:54:00:93:fd:7b Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:multinode-044930 Clientid:01:52:54:00:93:fd:7b}
	I0910 18:11:52.519342   39052 main.go:141] libmachine: (multinode-044930) DBG | domain multinode-044930 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:fd:7b in network mk-multinode-044930
	I0910 18:11:52.519482   39052 host.go:66] Checking if "multinode-044930" exists ...
	I0910 18:11:52.519787   39052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 18:11:52.519835   39052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:11:52.535035   39052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34155
	I0910 18:11:52.535394   39052 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:11:52.535835   39052 main.go:141] libmachine: Using API Version  1
	I0910 18:11:52.535858   39052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:11:52.536157   39052 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:11:52.536321   39052 main.go:141] libmachine: (multinode-044930) Calling .DriverName
	I0910 18:11:52.536478   39052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 18:11:52.536511   39052 main.go:141] libmachine: (multinode-044930) Calling .GetSSHHostname
	I0910 18:11:52.539564   39052 main.go:141] libmachine: (multinode-044930) DBG | domain multinode-044930 has defined MAC address 52:54:00:93:fd:7b in network mk-multinode-044930
	I0910 18:11:52.540090   39052 main.go:141] libmachine: (multinode-044930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:fd:7b", ip: ""} in network mk-multinode-044930: {Iface:virbr1 ExpiryTime:2024-09-10 19:08:44 +0000 UTC Type:0 Mac:52:54:00:93:fd:7b Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:multinode-044930 Clientid:01:52:54:00:93:fd:7b}
	I0910 18:11:52.540113   39052 main.go:141] libmachine: (multinode-044930) DBG | domain multinode-044930 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:fd:7b in network mk-multinode-044930
	I0910 18:11:52.540308   39052 main.go:141] libmachine: (multinode-044930) Calling .GetSSHPort
	I0910 18:11:52.540518   39052 main.go:141] libmachine: (multinode-044930) Calling .GetSSHKeyPath
	I0910 18:11:52.540675   39052 main.go:141] libmachine: (multinode-044930) Calling .GetSSHUsername
	I0910 18:11:52.540851   39052 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/multinode-044930/id_rsa Username:docker}
	I0910 18:11:52.619199   39052 ssh_runner.go:195] Run: systemctl --version
	I0910 18:11:52.626813   39052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:11:52.642758   39052 kubeconfig.go:125] found "multinode-044930" server: "https://192.168.39.149:8443"
	I0910 18:11:52.642792   39052 api_server.go:166] Checking apiserver status ...
	I0910 18:11:52.642837   39052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 18:11:52.657586   39052 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1827/cgroup
	W0910 18:11:52.668336   39052 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1827/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0910 18:11:52.668387   39052 ssh_runner.go:195] Run: ls
	I0910 18:11:52.673098   39052 api_server.go:253] Checking apiserver healthz at https://192.168.39.149:8443/healthz ...
	I0910 18:11:52.677373   39052 api_server.go:279] https://192.168.39.149:8443/healthz returned 200:
	ok
	I0910 18:11:52.677400   39052 status.go:422] multinode-044930 apiserver status = Running (err=<nil>)
	I0910 18:11:52.677412   39052 status.go:257] multinode-044930 status: &{Name:multinode-044930 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:11:52.677432   39052 status.go:255] checking status of multinode-044930-m02 ...
	I0910 18:11:52.677758   39052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 18:11:52.677802   39052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:11:52.693173   39052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33397
	I0910 18:11:52.693602   39052 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:11:52.694030   39052 main.go:141] libmachine: Using API Version  1
	I0910 18:11:52.694055   39052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:11:52.694323   39052 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:11:52.694569   39052 main.go:141] libmachine: (multinode-044930-m02) Calling .GetState
	I0910 18:11:52.696110   39052 status.go:330] multinode-044930-m02 host status = "Running" (err=<nil>)
	I0910 18:11:52.696124   39052 host.go:66] Checking if "multinode-044930-m02" exists ...
	I0910 18:11:52.696396   39052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 18:11:52.696427   39052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:11:52.712197   39052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42327
	I0910 18:11:52.712718   39052 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:11:52.713215   39052 main.go:141] libmachine: Using API Version  1
	I0910 18:11:52.713238   39052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:11:52.713605   39052 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:11:52.713804   39052 main.go:141] libmachine: (multinode-044930-m02) Calling .GetIP
	I0910 18:11:52.716743   39052 main.go:141] libmachine: (multinode-044930-m02) DBG | domain multinode-044930-m02 has defined MAC address 52:54:00:68:28:60 in network mk-multinode-044930
	I0910 18:11:52.717146   39052 main.go:141] libmachine: (multinode-044930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:60", ip: ""} in network mk-multinode-044930: {Iface:virbr1 ExpiryTime:2024-09-10 19:09:54 +0000 UTC Type:0 Mac:52:54:00:68:28:60 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-044930-m02 Clientid:01:52:54:00:68:28:60}
	I0910 18:11:52.717182   39052 main.go:141] libmachine: (multinode-044930-m02) DBG | domain multinode-044930-m02 has defined IP address 192.168.39.116 and MAC address 52:54:00:68:28:60 in network mk-multinode-044930
	I0910 18:11:52.717324   39052 host.go:66] Checking if "multinode-044930-m02" exists ...
	I0910 18:11:52.717716   39052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 18:11:52.717764   39052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:11:52.732966   39052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40085
	I0910 18:11:52.733441   39052 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:11:52.733926   39052 main.go:141] libmachine: Using API Version  1
	I0910 18:11:52.733951   39052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:11:52.734280   39052 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:11:52.734475   39052 main.go:141] libmachine: (multinode-044930-m02) Calling .DriverName
	I0910 18:11:52.734681   39052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0910 18:11:52.734702   39052 main.go:141] libmachine: (multinode-044930-m02) Calling .GetSSHHostname
	I0910 18:11:52.737455   39052 main.go:141] libmachine: (multinode-044930-m02) DBG | domain multinode-044930-m02 has defined MAC address 52:54:00:68:28:60 in network mk-multinode-044930
	I0910 18:11:52.737857   39052 main.go:141] libmachine: (multinode-044930-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:60", ip: ""} in network mk-multinode-044930: {Iface:virbr1 ExpiryTime:2024-09-10 19:09:54 +0000 UTC Type:0 Mac:52:54:00:68:28:60 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-044930-m02 Clientid:01:52:54:00:68:28:60}
	I0910 18:11:52.737890   39052 main.go:141] libmachine: (multinode-044930-m02) DBG | domain multinode-044930-m02 has defined IP address 192.168.39.116 and MAC address 52:54:00:68:28:60 in network mk-multinode-044930
	I0910 18:11:52.738078   39052 main.go:141] libmachine: (multinode-044930-m02) Calling .GetSSHPort
	I0910 18:11:52.738307   39052 main.go:141] libmachine: (multinode-044930-m02) Calling .GetSSHKeyPath
	I0910 18:11:52.738449   39052 main.go:141] libmachine: (multinode-044930-m02) Calling .GetSSHUsername
	I0910 18:11:52.738678   39052 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19598-5970/.minikube/machines/multinode-044930-m02/id_rsa Username:docker}
	I0910 18:11:52.817904   39052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 18:11:52.831185   39052 status.go:257] multinode-044930-m02 status: &{Name:multinode-044930-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:11:52.831223   39052 status.go:255] checking status of multinode-044930-m03 ...
	I0910 18:11:52.831546   39052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 18:11:52.831596   39052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:11:52.847162   39052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32973
	I0910 18:11:52.847647   39052 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:11:52.848118   39052 main.go:141] libmachine: Using API Version  1
	I0910 18:11:52.848139   39052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:11:52.848426   39052 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:11:52.848652   39052 main.go:141] libmachine: (multinode-044930-m03) Calling .GetState
	I0910 18:11:52.850335   39052 status.go:330] multinode-044930-m03 host status = "Stopped" (err=<nil>)
	I0910 18:11:52.850352   39052 status.go:343] host is not running, skipping remaining checks
	I0910 18:11:52.850360   39052 status.go:257] multinode-044930-m03 status: &{Name:multinode-044930-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.31s)
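The status.go:257 lines in the stderr trace print the struct behind each stdout summary. Mirrored below as a sketch; the field names are verbatim from the trace, but the types beyond the obvious strings and bool are assumptions:

package status

// NodeStatus mirrors the value printed at status.go:257 above, e.g.
// &{Name:multinode-044930-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped
// Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}.
type NodeStatus struct {
	Name       string
	Host       string // "Running" or "Stopped"
	Kubelet    string
	APIServer  string // "Irrelevant" on worker nodes, per the trace
	Kubeconfig string // "Configured" on the control plane, "Irrelevant" on workers
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}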

TestMultiNode/serial/StartAfterStop (42.14s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 node start m03 -v=7 --alsologtostderr
E0910 18:12:20.471143   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-044930 node start m03 -v=7 --alsologtostderr: (41.528744063s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (42.14s)

TestMultiNode/serial/RestartKeepsNodes (175.03s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-044930
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-044930
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-044930: (28.060972815s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-044930 --wait=true -v=8 --alsologtostderr
E0910 18:13:06.275286   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-044930 --wait=true -v=8 --alsologtostderr: (2m26.876036351s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-044930
--- PASS: TestMultiNode/serial/RestartKeepsNodes (175.03s)

TestMultiNode/serial/DeleteNode (2.01s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-044930 node delete m03: (1.510591597s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.01s)

TestMultiNode/serial/StopMultiNode (24.98s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-044930 stop: (24.813163815s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-044930 status: exit status 7 (82.068668ms)

-- stdout --
	multinode-044930
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-044930-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-044930 status --alsologtostderr: exit status 7 (83.086767ms)

-- stdout --
	multinode-044930
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-044930-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0910 18:15:56.974612   40775 out.go:345] Setting OutFile to fd 1 ...
	I0910 18:15:56.974701   40775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:15:56.974705   40775 out.go:358] Setting ErrFile to fd 2...
	I0910 18:15:56.974709   40775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 18:15:56.974898   40775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5970/.minikube/bin
	I0910 18:15:56.975062   40775 out.go:352] Setting JSON to false
	I0910 18:15:56.975090   40775 mustload.go:65] Loading cluster: multinode-044930
	I0910 18:15:56.975198   40775 notify.go:220] Checking for updates...
	I0910 18:15:56.975431   40775 config.go:182] Loaded profile config "multinode-044930": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 18:15:56.975444   40775 status.go:255] checking status of multinode-044930 ...
	I0910 18:15:56.975846   40775 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 18:15:56.975901   40775 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:15:56.995054   40775 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44555
	I0910 18:15:56.995481   40775 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:15:56.996062   40775 main.go:141] libmachine: Using API Version  1
	I0910 18:15:56.996083   40775 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:15:56.996503   40775 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:15:56.996727   40775 main.go:141] libmachine: (multinode-044930) Calling .GetState
	I0910 18:15:56.998449   40775 status.go:330] multinode-044930 host status = "Stopped" (err=<nil>)
	I0910 18:15:56.998465   40775 status.go:343] host is not running, skipping remaining checks
	I0910 18:15:56.998481   40775 status.go:257] multinode-044930 status: &{Name:multinode-044930 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0910 18:15:56.998507   40775 status.go:255] checking status of multinode-044930-m02 ...
	I0910 18:15:56.998932   40775 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0910 18:15:56.998973   40775 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0910 18:15:57.013928   40775 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43137
	I0910 18:15:57.014318   40775 main.go:141] libmachine: () Calling .GetVersion
	I0910 18:15:57.014798   40775 main.go:141] libmachine: Using API Version  1
	I0910 18:15:57.014821   40775 main.go:141] libmachine: () Calling .SetConfigRaw
	I0910 18:15:57.015116   40775 main.go:141] libmachine: () Calling .GetMachineName
	I0910 18:15:57.015283   40775 main.go:141] libmachine: (multinode-044930-m02) Calling .GetState
	I0910 18:15:57.016842   40775 status.go:330] multinode-044930-m02 host status = "Stopped" (err=<nil>)
	I0910 18:15:57.016857   40775 status.go:343] host is not running, skipping remaining checks
	I0910 18:15:57.016864   40775 status.go:257] multinode-044930-m02 status: &{Name:multinode-044930-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.98s)

TestMultiNode/serial/RestartMultiNode (228.34s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-044930 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0910 18:17:20.470779   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:18:06.275788   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-044930 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (3m47.832968385s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-044930 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (228.34s)

TestMultiNode/serial/ValidateNameConflict (48.53s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-044930
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-044930-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-044930-m02 --driver=kvm2 : exit status 14 (58.948937ms)

-- stdout --
	* [multinode-044930-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-5970/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5970/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-044930-m02' is duplicated with machine name 'multinode-044930-m02' in profile 'multinode-044930'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-044930-m03 --driver=kvm2 
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-044930-m03 --driver=kvm2 : (47.457700878s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-044930
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-044930: exit status 80 (202.108848ms)

-- stdout --
	* Adding node m03 to cluster multinode-044930 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-044930-m03 already exists in multinode-044930-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-044930-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.53s)
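The two rejections above pin down one rule: a new profile name must not collide with any existing profile, nor with a machine name owned by an existing multi-node profile. A hypothetical distillation of that rule (illustrative only, not minikube's code):

package main

import "fmt"

// validProfileName rejects a name already used as a profile or as a
// machine inside a profile, matching the MK_USAGE failure above.
func validProfileName(name string, machines map[string][]string) error {
	for profile, nodes := range machines {
		if name == profile {
			return fmt.Errorf("profile %q already exists", name)
		}
		for _, n := range nodes {
			if name == n {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, n, profile)
			}
		}
	}
	return nil
}

func main() {
	existing := map[string][]string{
		"multinode-044930": {"multinode-044930", "multinode-044930-m02"},
	}
	fmt.Println(validProfileName("multinode-044930-m02", existing)) // rejected, as in the log
	fmt.Println(validProfileName("multinode-044930-m03", existing)) // accepted
}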

TestPreload (147.62s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-171568 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0910 18:21:09.341565   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-171568 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m19.533665559s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-171568 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-171568 image pull gcr.io/k8s-minikube/busybox: (1.498634763s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-171568
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-171568: (12.503895235s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-171568 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0910 18:22:20.470612   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-171568 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (53.06131666s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-171568 image list
helpers_test.go:175: Cleaning up "test-preload-171568" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-171568
--- PASS: TestPreload (147.62s)
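The block above is the whole preload check in miniature: build a cluster with preloaded image tarballs disabled, add an image by hand, stop, then restart on the current default Kubernetes and confirm the image survived. A condensed sketch, with flags trimmed from the commands logged above:

    minikube start -p test-preload-171568 --preload=false --kubernetes-version=v1.24.4 --driver=kvm2
    minikube -p test-preload-171568 image pull gcr.io/k8s-minikube/busybox   # seed the cache by hand
    minikube stop -p test-preload-171568
    minikube start -p test-preload-171568 --driver=kvm2                      # restart, now with preload support
    minikube -p test-preload-171568 image list                               # busybox should still be listed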

TestScheduledStopUnix (118.15s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-189449 --memory=2048 --driver=kvm2 
E0910 18:23:06.275771   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-189449 --memory=2048 --driver=kvm2 : (46.60565285s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-189449 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-189449 -n scheduled-stop-189449
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-189449 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-189449 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-189449 -n scheduled-stop-189449
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-189449
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-189449 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-189449
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-189449: exit status 7 (64.016805ms)

-- stdout --
	scheduled-stop-189449
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-189449 -n scheduled-stop-189449
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-189449 -n scheduled-stop-189449: exit status 7 (58.616284ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-189449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-189449
--- PASS: TestScheduledStopUnix (118.15s)
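For reference, the scheduled-stop surface exercised above as one runnable sequence (profile name arbitrary):

    minikube stop -p scheduled-stop-189449 --schedule 5m        # arm a stop five minutes out
    minikube status --format={{.TimeToStop}} -p scheduled-stop-189449
    minikube stop -p scheduled-stop-189449 --cancel-scheduled   # disarm; the host stays Running
    minikube stop -p scheduled-stop-189449 --schedule 15s       # re-arm; once it fires, status exits 7 with everything Stopped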

TestSkaffold (127.63s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe322997367 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-089494 --memory=2600 --driver=kvm2 
E0910 18:25:23.539366   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-089494 --memory=2600 --driver=kvm2 : (49.806758749s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe322997367 run --minikube-profile skaffold-089494 --kube-context skaffold-089494 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe322997367 run --minikube-profile skaffold-089494 --kube-context skaffold-089494 --status-check=true --port-forward=false --interactive=false: (1m5.043179378s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5bcfd8dc59-8h6vz" [8e6bc62e-49fb-4115-a399-78bf8c58c127] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004588507s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5947c5fb98-pj6sm" [390cf539-246b-4755-b636-298c4fa9cdd5] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003316622s
helpers_test.go:175: Cleaning up "skaffold-089494" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-089494
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-089494: (1.153093518s)
--- PASS: TestSkaffold (127.63s)

TestRunningBinaryUpgrade (172.47s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.859579339 start -p running-upgrade-106520 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.859579339 start -p running-upgrade-106520 --memory=2200 --vm-driver=kvm2 : (1m31.474190846s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-106520 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-106520 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m19.428577122s)
helpers_test.go:175: Cleaning up "running-upgrade-106520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-106520
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-106520: (1.136969182s)
--- PASS: TestRunningBinaryUpgrade (172.47s)

TestKubernetesUpgrade (228.23s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-453214 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-453214 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (1m45.028677913s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-453214
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-453214: (3.858966723s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-453214 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-453214 status --format={{.Host}}: exit status 7 (74.189537ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-453214 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-453214 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 : (1m25.409233287s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-453214 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-453214 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-453214 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (87.249146ms)

-- stdout --
	* [kubernetes-upgrade-453214] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-5970/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5970/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-453214
	    minikube start -p kubernetes-upgrade-453214 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4532142 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-453214 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-453214 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-453214 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 : (32.719657602s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-453214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-453214
--- PASS: TestKubernetesUpgrade (228.23s)
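The test above encodes the supported direction of version changes: stop-then-start with a newer --kubernetes-version upgrades the cluster in place, while a lower version exits 106 (K8S_DOWNGRADE_UNSUPPORTED) and points at delete-and-recreate, exactly as the suggestion block prints. In outline:

    minikube start -p kubernetes-upgrade-453214 --kubernetes-version=v1.20.0 --driver=kvm2
    minikube stop  -p kubernetes-upgrade-453214
    minikube start -p kubernetes-upgrade-453214 --kubernetes-version=v1.31.0 --driver=kvm2   # upgrade: allowed
    minikube start -p kubernetes-upgrade-453214 --kubernetes-version=v1.20.0 --driver=kvm2   # downgrade: exit 106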

TestStoppedBinaryUpgrade/Setup (0.36s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.36s)

TestStoppedBinaryUpgrade/Upgrade (216.67s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1117181337 start -p stopped-upgrade-684552 --memory=2200 --vm-driver=kvm2 
E0910 18:27:20.471131   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:28:06.275879   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1117181337 start -p stopped-upgrade-684552 --memory=2200 --vm-driver=kvm2 : (2m18.374826053s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1117181337 -p stopped-upgrade-684552 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1117181337 -p stopped-upgrade-684552 stop: (12.513516037s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-684552 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-684552 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m5.785342074s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (216.67s)
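The upgrade path being proven here: a cluster created and then stopped by an older release (the v1.26.0 binary unpacked to /tmp) must be adoptable by the binary under test. Schematically, where <tmp-suffix> stands in for the random suffix in the real paths above:

    /tmp/minikube-v1.26.0.<tmp-suffix> start -p stopped-upgrade-684552 --memory=2200 --vm-driver=kvm2
    /tmp/minikube-v1.26.0.<tmp-suffix> -p stopped-upgrade-684552 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-684552 --memory=2200 --driver=kvm2   # new binary takes over the stopped profile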

TestStoppedBinaryUpgrade/MinikubeLogs (1.3s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-684552
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-684552: (1.301961217s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.30s)

TestPause/serial/Start (89.2s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-351357 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
E0910 18:32:17.152176   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/skaffold-089494/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:32:20.471518   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-351357 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m29.201721925s)
--- PASS: TestPause/serial/Start (89.20s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-047180 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-047180 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (75.085251ms)

-- stdout --
	* [NoKubernetes-047180] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-5970/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5970/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (62.16s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-047180 --driver=kvm2 
E0910 18:33:18.596744   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/skaffold-089494/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-047180 --driver=kvm2 : (1m1.82750245s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-047180 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (62.16s)

TestPause/serial/SecondStartNoReconfiguration (44.71s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-351357 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-351357 --alsologtostderr -v=1 --driver=kvm2 : (44.683765551s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (44.71s)

TestNetworkPlugins/group/auto/Start (78.41s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-962886 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-962886 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m18.412915604s)
--- PASS: TestNetworkPlugins/group/auto/Start (78.41s)

TestNoKubernetes/serial/StartWithStopK8s (27.61s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-047180 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-047180 --no-kubernetes --driver=kvm2 : (26.237132556s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-047180 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-047180 status -o json: exit status 2 (250.572001ms)

-- stdout --
	{"Name":"NoKubernetes-047180","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-047180
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-047180: (1.123957889s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (27.61s)
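Restarting an existing profile with --no-kubernetes keeps the VM but shuts the control plane down, which is why `status -o json` above reports Host Running with Kubelet/APIServer Stopped and exits 2 rather than 0 (minikube encodes degraded states in the status exit code; 7, seen elsewhere in this report, is a fully stopped host). One way to assert that state in a script, assuming jq is installed:

    out/minikube-linux-amd64 -p NoKubernetes-047180 status -o json | jq -r '.Host, .Kubelet'   # Running / Stopped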

TestPause/serial/Pause (0.65s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-351357 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.65s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-351357 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-351357 --output=json --layout=cluster: exit status 2 (307.694402ms)

-- stdout --
	{"Name":"pause-351357","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-351357","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)

TestPause/serial/Unpause (0.64s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-351357 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

TestPause/serial/PauseAgain (0.75s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-351357 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.75s)

TestPause/serial/DeletePaused (1.89s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-351357 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-351357 --alsologtostderr -v=5: (1.891498575s)
--- PASS: TestPause/serial/DeletePaused (1.89s)
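Taken together, the serial Pause steps walk the full lifecycle; note that `status --layout=cluster` deliberately exits non-zero (2) on a paused cluster and reports StatusCode 418 / "Paused" for the apiserver. The sequence, compressed from the commands logged above:

    out/minikube-linux-amd64 pause -p pause-351357
    out/minikube-linux-amd64 status -p pause-351357 --output=json --layout=cluster   # exit 2, StatusName "Paused"
    out/minikube-linux-amd64 unpause -p pause-351357
    out/minikube-linux-amd64 pause -p pause-351357    # pausing again from Running behaves the same way
    out/minikube-linux-amd64 delete -p pause-351357   # a paused cluster can be deleted without unpausing first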

TestNetworkPlugins/group/kindnet/Start (80.63s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-962886 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-962886 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m20.633950185s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.63s)

TestPause/serial/VerifyDeletedResources (0.37s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.37s)

TestNetworkPlugins/group/calico/Start (122s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-962886 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E0910 18:34:40.518453   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/skaffold-089494/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-962886 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m1.998026717s)
--- PASS: TestNetworkPlugins/group/calico/Start (122.00s)

TestNoKubernetes/serial/Start (82.12s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-047180 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-047180 --no-kubernetes --driver=kvm2 : (1m22.121459758s)
--- PASS: TestNoKubernetes/serial/Start (82.12s)

TestNetworkPlugins/group/auto/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-962886 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

TestNetworkPlugins/group/auto/NetCatPod (13.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-962886 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fk5hj" [8b0bab1a-6707-4a8d-ac5f-bf8d66b03790] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fk5hj" [8b0bab1a-6707-4a8d-ac5f-bf8d66b03790] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.004100114s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.27s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-962886 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-962886 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-962886 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
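The last three checks probe distinct paths with the same netcat deployment: DNS resolves the in-cluster service domain, Localhost dials the pod's own port over loopback, and HairPin makes the pod reach itself back through its Service name, which only succeeds when the CNI/kube-proxy setup forwards hairpin traffic. The probes as issued:

    kubectl --context auto-962886 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-962886 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-962886 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"   # via the netcat Service: hairpin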

TestNetworkPlugins/group/custom-flannel/Start (93.78s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-962886 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-962886 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m33.780555642s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (93.78s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ccf99" [4fcfd060-2a80-4075-9ae5-23423212c685] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004727487s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-962886 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-962886 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ljwj6" [2ae1e375-0d35-46ce-a878-8d423da3b419] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ljwj6" [2ae1e375-0d35-46ce-a878-8d423da3b419] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.00379363s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.25s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-047180 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-047180 "sudo systemctl is-active --quiet service kubelet": exit status 1 (233.746754ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
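The non-zero exit above is the assertion, not a failure: `minikube ssh` propagates the remote command's status, and `systemctl is-active` exits 3 for an inactive unit (hence the "Process exited with status 3" in stderr), so kubelet being off surfaces as exit 1 from the wrapper. Reproduced by hand:

    out/minikube-linux-amd64 ssh -p NoKubernetes-047180 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero while kubelet is stopped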

TestNoKubernetes/serial/ProfileList (1.22s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.22s)

TestNoKubernetes/serial/Stop (2.32s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-047180
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-047180: (2.323114663s)
--- PASS: TestNoKubernetes/serial/Stop (2.32s)

TestNoKubernetes/serial/StartNoArgs (43.27s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-047180 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-047180 --driver=kvm2 : (43.267197069s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (43.27s)

TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-962886 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-962886 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-962886 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/false/Start (115.47s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-962886 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-962886 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m55.46737s)
--- PASS: TestNetworkPlugins/group/false/Start (115.47s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-45fxf" [4c00bf41-6453-4eb6-a695-60a1a7eb2709] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005109251s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-962886 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

TestNetworkPlugins/group/calico/NetCatPod (12.24s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-962886 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mwq8n" [93babe8d-1808-43dd-a210-3f35e8dc28a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mwq8n" [93babe8d-1808-43dd-a210-3f35e8dc28a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005836533s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.24s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-962886 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-962886 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-047180 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-047180 "sudo systemctl is-active --quiet service kubelet": exit status 1 (228.635017ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-962886 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/enable-default-cni/Start (84.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-962886 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0910 18:36:56.657542   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/skaffold-089494/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-962886 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m24.140596107s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.14s)

TestNetworkPlugins/group/flannel/Start (107.53s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-962886 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-962886 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m47.528165623s)
--- PASS: TestNetworkPlugins/group/flannel/Start (107.53s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-962886 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-962886 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gxc6j" [699ed933-2bff-47b9-a8a4-61c7321adf3a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gxc6j" [699ed933-2bff-47b9-a8a4-61c7321adf3a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005126182s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-962886 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-962886 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-962886 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/kubenet/Start (96.27s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-962886 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
E0910 18:37:49.343589   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:37:58.672468   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/gvisor-605402/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:37:58.678979   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/gvisor-605402/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:37:58.690500   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/gvisor-605402/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:37:58.711973   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/gvisor-605402/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:37:58.753457   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/gvisor-605402/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:37:58.834961   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/gvisor-605402/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:37:58.996611   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/gvisor-605402/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:37:59.318290   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/gvisor-605402/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:37:59.959734   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/gvisor-605402/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:38:01.241633   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/gvisor-605402/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:38:03.803601   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/gvisor-605402/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:38:06.275958   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:38:08.925028   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/gvisor-605402/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-962886 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m36.273326917s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (96.27s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-962886 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.05s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-962886 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context enable-default-cni-962886 replace --force -f testdata/netcat-deployment.yaml: (1.979022592s)
E0910 18:38:19.167174   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/gvisor-605402/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jhg4j" [9bab7ff3-4536-4874-b6e7-de7c44022e29] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jhg4j" [9bab7ff3-4536-4874-b6e7-de7c44022e29] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004296513s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.05s)

TestNetworkPlugins/group/false/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-962886 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.26s)

TestNetworkPlugins/group/false/NetCatPod (10.41s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-962886 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-p2jgk" [eceebf05-a3d3-4d64-a26f-afb1eabb03ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-p2jgk" [eceebf05-a3d3-4d64-a26f-afb1eabb03ee] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.004505184s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.41s)

TestNetworkPlugins/group/false/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-962886 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.22s)

TestNetworkPlugins/group/false/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-962886 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-962886 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/false/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-962886 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-962886 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-962886 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/bridge/Start (96.82s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-962886 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-962886 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m36.81589958s)
--- PASS: TestNetworkPlugins/group/bridge/Start (96.82s)

TestStartStop/group/old-k8s-version/serial/FirstStart (202.75s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-308130 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-308130 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (3m22.752209964s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (202.75s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-8922h" [6128f561-7972-4501-af70-7bc4a936b895] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.007528459s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-962886 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/flannel/NetCatPod (10.23s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-962886 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vdhwl" [cba0bbad-e58a-492f-9fc8-e92e93bf2bae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vdhwl" [cba0bbad-e58a-492f-9fc8-e92e93bf2bae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.00548149s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

TestNetworkPlugins/group/flannel/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-962886 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-962886 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-962886 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-962886 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.21s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.26s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-962886 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xltbp" [f917e693-ba09-4e3e-9d58-efddae81b39f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xltbp" [f917e693-ba09-4e3e-9d58-efddae81b39f] Running
E0910 18:39:20.610805   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/gvisor-605402/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.006602798s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.26s)

TestNetworkPlugins/group/kubenet/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-962886 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.23s)

TestNetworkPlugins/group/kubenet/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-962886 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.21s)

TestNetworkPlugins/group/kubenet/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-962886 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.19s)

TestStartStop/group/no-preload/serial/FirstStart (127.62s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-777185 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-777185 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0: (2m7.622941631s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (127.62s)

TestStartStop/group/embed-certs/serial/FirstStart (136.7s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-229715 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.0
E0910 18:40:04.904905   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/auto-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:04.911340   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/auto-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:04.922748   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/auto-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:04.944212   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/auto-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:04.985619   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/auto-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:05.067069   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/auto-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:05.228956   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/auto-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:05.550692   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/auto-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:06.192410   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/auto-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:07.474281   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/auto-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:10.035859   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/auto-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:15.157269   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/auto-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:40:25.399083   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/auto-962886/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-229715 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.0: (2m16.70320858s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (136.70s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-962886 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

TestNetworkPlugins/group/bridge/NetCatPod (11.42s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-962886 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bcx2p" [bfe081ae-5a1e-4b65-a564-21e729bc5920] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bcx2p" [bfe081ae-5a1e-4b65-a564-21e729bc5920] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004818774s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.42s)

TestNetworkPlugins/group/bridge/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-962886 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-962886 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-962886 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
E0910 18:47:58.672207   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/gvisor-605402/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.9s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-442563 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.0
E0910 18:41:00.238198   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kindnet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:41:10.480135   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kindnet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:41:26.843076   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/auto-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:41:30.961940   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kindnet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:41:32.771235   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/calico-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:41:32.777779   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/calico-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:41:32.789280   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/calico-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:41:32.810907   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/calico-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:41:32.852431   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/calico-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:41:32.934030   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/calico-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:41:33.095502   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/calico-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:41:33.417756   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/calico-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:41:34.060070   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/calico-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:41:35.341487   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/calico-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:41:37.902937   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/calico-962886/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-442563 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.0: (1m10.898219924s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.90s)

TestStartStop/group/no-preload/serial/DeployApp (8.32s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-777185 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a2cf6ea9-2134-4044-a978-97384007cd28] Pending
helpers_test.go:344: "busybox" [a2cf6ea9-2134-4044-a978-97384007cd28] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a2cf6ea9-2134-4044-a978-97384007cd28] Running
E0910 18:41:43.025037   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/calico-962886/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.005710394s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-777185 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.32s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-777185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-777185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.002540608s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-777185 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/no-preload/serial/Stop (13.35s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-777185 --alsologtostderr -v=3
E0910 18:41:53.267293   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/calico-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:41:56.657429   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/skaffold-089494/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-777185 --alsologtostderr -v=3: (13.351521563s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.35s)

TestStartStop/group/embed-certs/serial/DeployApp (9.34s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-229715 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5929f7c2-68e7-4909-bda7-770130eabbcc] Pending
helpers_test.go:344: "busybox" [5929f7c2-68e7-4909-bda7-770130eabbcc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5929f7c2-68e7-4909-bda7-770130eabbcc] Running
E0910 18:42:03.541088   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004813808s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-229715 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-777185 -n no-preload-777185
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-777185 -n no-preload-777185: exit status 7 (75.017018ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-777185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (299.48s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-777185 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-777185 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0: (4m59.198878913s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-777185 -n no-preload-777185
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (299.48s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-229715 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-229715 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/embed-certs/serial/Stop (13.33s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-229715 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-229715 --alsologtostderr -v=3: (13.325747877s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.33s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-442563 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9e616ed2-27f5-45a9-b169-38daa7816a6f] Pending
helpers_test.go:344: "busybox" [9e616ed2-27f5-45a9-b169-38daa7816a6f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0910 18:42:09.527777   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/custom-flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:42:09.534256   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/custom-flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:42:09.545726   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/custom-flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:42:09.567194   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/custom-flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:42:09.608855   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/custom-flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:42:09.690408   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/custom-flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:42:09.851878   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/custom-flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:42:10.173729   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/custom-flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:42:10.815145   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/custom-flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [9e616ed2-27f5-45a9-b169-38daa7816a6f] Running
E0910 18:42:11.923722   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kindnet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:42:12.096446   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/custom-flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:42:13.749479   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/calico-962886/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.005179158s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-442563 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-308130 create -f testdata/busybox.yaml
E0910 18:42:14.658627   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/custom-flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1e0e8a46-d293-4966-92a8-436fc04d1097] Pending
helpers_test.go:344: "busybox" [1e0e8a46-d293-4966-92a8-436fc04d1097] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1e0e8a46-d293-4966-92a8-436fc04d1097] Running
E0910 18:42:19.780096   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/custom-flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:42:20.471298   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003812327s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-308130 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-442563 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-442563 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.34s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-442563 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-442563 --alsologtostderr -v=3: (13.338288991s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.34s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-229715 -n embed-certs-229715
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-229715 -n embed-certs-229715: exit status 7 (69.959329ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-229715 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (301.51s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-229715 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-229715 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.0: (5m1.196477975s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-229715 -n embed-certs-229715
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (301.51s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.96s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-308130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-308130 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.96s)

TestStartStop/group/old-k8s-version/serial/Stop (13.38s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-308130 --alsologtostderr -v=3
E0910 18:42:30.021382   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/custom-flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-308130 --alsologtostderr -v=3: (13.383837651s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-442563 -n default-k8s-diff-port-442563
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-442563 -n default-k8s-diff-port-442563: exit status 7 (79.375337ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-442563 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (329.63s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-442563 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-442563 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.0: (5m29.305174724s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-442563 -n default-k8s-diff-port-442563
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (329.63s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-308130 -n old-k8s-version-308130
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-308130 -n old-k8s-version-308130: exit status 7 (77.956715ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-308130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (429.95s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-308130 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
E0910 18:42:48.765255   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/auto-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:42:50.502930   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/custom-flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:42:54.711320   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/calico-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:42:58.671539   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/gvisor-605402/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:06.275200   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:19.140362   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/enable-default-cni-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:19.146793   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/enable-default-cni-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:19.158236   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/enable-default-cni-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:19.179770   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/enable-default-cni-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:19.221248   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/enable-default-cni-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:19.302732   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/enable-default-cni-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:19.464833   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/enable-default-cni-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:19.786886   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/enable-default-cni-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:20.429154   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/enable-default-cni-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:20.885382   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/false-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:20.891938   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/false-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:20.903360   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/false-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:20.924813   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/false-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:20.966295   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/false-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:21.047816   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/false-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:21.209367   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/false-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:21.530669   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/false-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:21.711286   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/enable-default-cni-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:22.172736   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/false-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:23.454247   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/false-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:24.273393   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/enable-default-cni-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:26.015627   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/false-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:26.374444   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/gvisor-605402/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:29.395629   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/enable-default-cni-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:31.137985   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/false-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:31.464857   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/custom-flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:33.845304   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kindnet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:39.637329   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/enable-default-cni-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:41.380322   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/false-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:56.154914   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:56.161398   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:56.172923   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:56.194436   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:56.235929   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:56.317228   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:56.478830   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:56.800534   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:57.442417   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:43:58.724351   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:00.118658   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/enable-default-cni-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:01.286320   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:01.862212   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/false-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:06.408108   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:13.509251   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kubenet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:13.515708   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kubenet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:13.527176   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kubenet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:13.548657   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kubenet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:13.590143   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kubenet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:13.671635   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kubenet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:13.833282   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kubenet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:14.154934   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kubenet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:14.796241   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kubenet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:16.078257   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kubenet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:16.633149   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/calico-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:16.649594   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:18.640150   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kubenet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:23.762454   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kubenet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:34.004755   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kubenet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:37.131243   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:41.080597   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/enable-default-cni-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:42.823670   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/false-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:53.386632   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/custom-flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:44:54.486646   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kubenet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:45:04.904788   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/auto-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:45:18.093554   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:45:28.166857   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/bridge-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:45:28.173361   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/bridge-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:45:28.184901   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/bridge-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:45:28.206312   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/bridge-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:45:28.247808   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/bridge-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:45:28.329402   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/bridge-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:45:28.490753   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/bridge-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:45:28.812880   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/bridge-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:45:29.455145   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/bridge-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:45:30.736525   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/bridge-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:45:32.606754   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/auto-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:45:33.298329   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/bridge-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:45:35.448687   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kubenet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:45:38.420351   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/bridge-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:45:48.662664   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/bridge-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:45:49.985342   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kindnet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:46:03.001975   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/enable-default-cni-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:46:04.745550   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/false-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:46:09.145070   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/bridge-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:46:17.686712   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kindnet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:46:32.771410   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/calico-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:46:40.015066   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:46:50.107309   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/bridge-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:46:56.657839   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/skaffold-089494/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:46:57.370080   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kubenet-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:47:00.474978   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/calico-962886/client.crt: no such file or directory" logger="UnhandledError"
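The cert_rotation errors above come from client-go's certificate watcher: the shared kubeconfig still references client.crt files of profiles (flannel-962886, kubenet-962886, bridge-962886, and so on) that earlier tests already deleted, so they are log noise rather than test failures. A minimal sketch for stripping that noise when scanning a saved copy of this log, assuming it is stored as test.log:

    # drop the repeated client.crt watcher noise so real failures stand out
    grep -v 'cert_rotation.go' test.log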
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-308130 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (7m9.706657985s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-308130 -n old-k8s-version-308130
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (429.95s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9bcgc" [d541bb93-9865-49f3-b88d-879a8aea5c58] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004060501s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

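The wait above is a label-selector poll against the dashboard namespace; an equivalent hand-run check, assuming the no-preload-777185 context from this report is still present:

    # watch for the dashboard pod the test polls (selector taken from the log above)
    kubectl --context no-preload-777185 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard --watch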
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9bcgc" [d541bb93-9865-49f3-b88d-879a8aea5c58] Running
E0910 18:47:09.527027   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/custom-flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004446215s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-777185 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-777185 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

x
+
TestStartStop/group/no-preload/serial/Pause (2.64s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-777185 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-777185 -n no-preload-777185
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-777185 -n no-preload-777185: exit status 2 (253.982563ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-777185 -n no-preload-777185
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-777185 -n no-preload-777185: exit status 2 (245.773668ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-777185 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-777185 -n no-preload-777185
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-777185 -n no-preload-777185
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.64s)

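The Pause subtest above drives a fixed sequence: pause the profile, confirm the apiserver reports Paused and the kubelet Stopped, then unpause and re-check. The "(may be ok)" notes appear because minikube status deliberately returns a nonzero exit code when components are paused or stopped. A minimal sketch of the same loop, using the profile name from this report:

    minikube pause -p no-preload-777185
    minikube status --format={{.APIServer}} -p no-preload-777185   # Paused (exit status 2, as in the log above)
    minikube status --format={{.Kubelet}} -p no-preload-777185     # Stopped (exit status 2)
    minikube unpause -p no-preload-777185
    minikube status -p no-preload-777185                           # exit status 0 once everything is Running again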
x
+
TestStartStop/group/newest-cni/serial/FirstStart (63.72s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-451143 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0
E0910 18:47:20.470692   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/functional-256199/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-451143 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0: (1m3.72408037s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (63.72s)

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9hvnf" [78ed4f7d-042c-4ea0-8787-8cd914a8f7c7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9hvnf" [78ed4f7d-042c-4ea0-8787-8cd914a8f7c7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004561174s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9hvnf" [78ed4f7d-042c-4ea0-8787-8cd914a8f7c7] Running
E0910 18:47:37.228932   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/custom-flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005174436s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-229715 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-229715 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

x
+
TestStartStop/group/embed-certs/serial/Pause (2.77s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-229715 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-229715 -n embed-certs-229715
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-229715 -n embed-certs-229715: exit status 2 (284.306275ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-229715 -n embed-certs-229715
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-229715 -n embed-certs-229715: exit status 2 (256.89123ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-229715 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-229715 -n embed-certs-229715
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-229715 -n embed-certs-229715
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.77s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xg4j7" [05a17184-5874-41eb-a2cf-7b60617e2f00] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xg4j7" [05a17184-5874-41eb-a2cf-7b60617e2f00] Running
E0910 18:48:06.275750   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/addons-447248/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.005456288s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xg4j7" [05a17184-5874-41eb-a2cf-7b60617e2f00] Running
E0910 18:48:12.029695   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/bridge-962886/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004940606s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-442563 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-442563 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-442563 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-442563 -n default-k8s-diff-port-442563
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-442563 -n default-k8s-diff-port-442563: exit status 2 (250.240621ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-442563 -n default-k8s-diff-port-442563
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-442563 -n default-k8s-diff-port-442563: exit status 2 (256.369825ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-442563 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-442563 -n default-k8s-diff-port-442563
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-442563 -n default-k8s-diff-port-442563
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.64s)

x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-451143 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0910 18:48:19.722330   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/skaffold-089494/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)

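Worth noting in the enable command above: --images=MetricsServer=registry.k8s.io/echoserver:1.4 together with --registries=MetricsServer=fake.domain redirects the addon's image reference, so the enable path is exercised without pulling the real metrics-server image. The general shape of the override, with placeholder values:

    minikube addons enable <addon> -p <profile> --images=<Component>=<image> --registries=<Component>=<registry>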
x
+
TestStartStop/group/newest-cni/serial/Stop (13.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-451143 --alsologtostderr -v=3
E0910 18:48:20.885185   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/false-962886/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-451143 --alsologtostderr -v=3: (13.335944253s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.34s)

x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-451143 -n newest-cni-451143
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-451143 -n newest-cni-451143: exit status 7 (63.223466ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-451143 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

x
+
TestStartStop/group/newest-cni/serial/SecondStart (37.54s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-451143 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0
E0910 18:48:46.843789   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/enable-default-cni-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:48:48.587219   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/false-962886/client.crt: no such file or directory" logger="UnhandledError"
E0910 18:48:56.154874   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/flannel-962886/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-451143 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0: (37.283257464s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-451143 -n newest-cni-451143
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.54s)

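SecondStart repeats the original start flags against the stopped profile and then verifies the host with a Go-template status query. The --format flag accepts any Go template over minikube's status fields, so multi-field checks work too; a small sketch using the field names that appear throughout this report:

    minikube status -p newest-cni-451143 --format={{.Host}}
    minikube status -p newest-cni-451143 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'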
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-451143 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

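VerifyKubernetesImages lists the cluster's images as JSON and flags anything outside the expected Kubernetes set (here the gvisor-addon image). A rough hand-run equivalent, assuming jq is installed and that each JSON entry carries a repoTags array, as current minikube releases emit:

    minikube -p newest-cni-451143 image list --format=json | jq -r '.[].repoTags[]'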
x
+
TestStartStop/group/newest-cni/serial/Pause (2.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-451143 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-451143 -n newest-cni-451143
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-451143 -n newest-cni-451143: exit status 2 (243.883444ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-451143 -n newest-cni-451143
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-451143 -n newest-cni-451143: exit status 2 (245.450187ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-451143 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-451143 -n newest-cni-451143
E0910 18:49:13.509617   13146 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19598-5970/.minikube/profiles/kubenet-962886/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-451143 -n newest-cni-451143
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.24s)

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-fxp42" [98a8bbe1-d3ce-4be2-ab68-f001c0d66a73] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003859742s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-fxp42" [98a8bbe1-d3ce-4be2-ab68-f001c0d66a73] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004323569s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-308130 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-308130 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)

x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-308130 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-308130 -n old-k8s-version-308130
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-308130 -n old-k8s-version-308130: exit status 2 (232.479018ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-308130 -n old-k8s-version-308130
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-308130 -n old-k8s-version-308130: exit status 2 (230.594618ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-308130 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-308130 -n old-k8s-version-308130
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-308130 -n old-k8s-version-308130
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.27s)

Test skip (31/341)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

x
+
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

x
+
TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

x
+
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

x
+
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

x
+
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

x
+
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

x
+
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

x
+
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

x
+
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

x
+
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

x
+
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

x
+
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

x
+
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

x
+
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

x
+
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.2s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-962886 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-962886

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-962886

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-962886

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-962886

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-962886

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-962886

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-962886

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-962886

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-962886

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-962886

>>> host: /etc/nsswitch.conf:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: /etc/hosts:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: /etc/resolv.conf:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-962886

>>> host: crictl pods:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: crictl containers:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> k8s: describe netcat deployment:
error: context "cilium-962886" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-962886" does not exist

>>> k8s: netcat logs:
error: context "cilium-962886" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-962886" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-962886" does not exist

>>> k8s: coredns logs:
error: context "cilium-962886" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-962886" does not exist

>>> k8s: api server logs:
error: context "cilium-962886" does not exist

>>> host: /etc/cni:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: ip a s:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: ip r s:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: iptables-save:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: iptables table nat:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-962886

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-962886

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-962886" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-962886" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-962886

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-962886

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-962886" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-962886" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-962886" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-962886" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-962886" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: kubelet daemon config:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> k8s: kubelet logs:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-962886

>>> host: docker daemon status:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: docker daemon config:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: docker system info:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: cri-docker daemon status:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: cri-docker daemon config:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: cri-dockerd version:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: containerd daemon status:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: containerd daemon config:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: containerd config dump:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: crio daemon status:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: crio daemon config:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: /etc/crio:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

>>> host: crio config:
* Profile "cilium-962886" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-962886"

----------------------- debugLogs end: cilium-962886 [took: 3.050278813s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-962886" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-962886
--- SKIP: TestNetworkPlugins/group/cilium (3.20s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-219682" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-219682
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
