Test Report: KVM_Linux 19651

f000a69778791892f7d89fef6358d7150d12a198:2024-09-16:36236

Failed tests (1/341)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 73.48        |
TestAddons/parallel/Registry (73.48s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.057399ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-s7jtc" [ff532941-80d3-4c2f-8fee-58e373f194d0] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004463302s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-sv7w9" [4332ab2c-d8b5-4a98-bd5a-54ed98d85a50] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003539444s
addons_test.go:342: (dbg) Run:  kubectl --context addons-855148 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-855148 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-855148 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.079318719s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-855148 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-855148 ip
2024/09/16 10:37:17 [DEBUG] GET http://192.168.39.55:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-855148 addons disable registry --alsologtostderr -v=1
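The failing assertion above probes the registry Service through its in-cluster DNS name and expects an "HTTP/1.1 200" response. As a rough sketch (illustrative shell, not part of the test output), the probed URL follows the standard Kubernetes Service DNS convention `<service>.<namespace>.svc.<cluster-domain>`:

```shell
# Sketch: how the probed URL is composed from Kubernetes Service DNS
# conventions. The names below match the test; the script itself is
# illustrative only, not taken from the test harness.
SERVICE=registry
NAMESPACE=kube-system
CLUSTER_DOMAIN=cluster.local
echo "http://${SERVICE}.${NAMESPACE}.svc.${CLUSTER_DOMAIN}"

# A manual re-run of the failing probe would look like this (requires
# a live addons-855148 cluster, so it is commented out here):
# kubectl --context addons-855148 run registry-test --rm --restart=Never \
#   --image=gcr.io/k8s-minikube/busybox -it -- \
#   sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
```

Since both registry pods were reported healthy but the wget probe hit the one-minute timeout (exit status 1), this pattern tends to suggest a problem with in-cluster DNS resolution or Service routing rather than with the registry pods themselves.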
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-855148 -n addons-855148
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-855148 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-799669                                                                     | download-only-799669 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-824353 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | binary-mirror-824353                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42639                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-824353                                                                     | binary-mirror-824353 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| addons  | disable dashboard -p                                                                        | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-855148                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-855148                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-855148 --wait=true                                                                | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:27 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2  --addons=ingress                                                             |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-855148 addons disable                                                                | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:27 UTC | 16 Sep 24 10:28 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | -p addons-855148                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | addons-855148                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-855148 ssh cat                                                                       | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | /opt/local-path-provisioner/pvc-5340e152-c54e-4082-abd2-db1266cf31fd_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-855148 addons disable                                                                | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:37 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-855148 addons disable                                                                | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-855148 addons disable                                                                | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-855148 addons                                                                        | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | addons-855148                                                                               |                      |         |         |                     |                     |
	| addons  | addons-855148 addons                                                                        | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-855148 addons                                                                        | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:36 UTC | 16 Sep 24 10:36 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-855148 addons disable                                                                | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:37 UTC | 16 Sep 24 10:37 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-855148 ssh curl -s                                                                   | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:37 UTC | 16 Sep 24 10:37 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-855148 ip                                                                            | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:37 UTC | 16 Sep 24 10:37 UTC |
	| addons  | addons-855148 addons disable                                                                | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:37 UTC | 16 Sep 24 10:37 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:37 UTC | 16 Sep 24 10:37 UTC |
	|         | -p addons-855148                                                                            |                      |         |         |                     |                     |
	| addons  | addons-855148 addons disable                                                                | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:37 UTC | 16 Sep 24 10:37 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| ip      | addons-855148 ip                                                                            | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:37 UTC | 16 Sep 24 10:37 UTC |
	| addons  | addons-855148 addons disable                                                                | addons-855148        | jenkins | v1.34.0 | 16 Sep 24 10:37 UTC | 16 Sep 24 10:37 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:23:45
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:23:45.222014   12665 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:23:45.222119   12665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:45.222128   12665 out.go:358] Setting ErrFile to fd 2...
	I0916 10:23:45.222132   12665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:45.222301   12665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3871/.minikube/bin
	I0916 10:23:45.222900   12665 out.go:352] Setting JSON to false
	I0916 10:23:45.223726   12665 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":374,"bootTime":1726481851,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:23:45.223814   12665 start.go:139] virtualization: kvm guest
	I0916 10:23:45.225800   12665 out.go:177] * [addons-855148] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:23:45.227057   12665 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:23:45.227056   12665 notify.go:220] Checking for updates...
	I0916 10:23:45.228310   12665 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:23:45.229563   12665 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3871/kubeconfig
	I0916 10:23:45.230841   12665 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3871/.minikube
	I0916 10:23:45.231973   12665 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:23:45.233005   12665 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:23:45.234243   12665 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:23:45.266444   12665 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 10:23:45.267611   12665 start.go:297] selected driver: kvm2
	I0916 10:23:45.267626   12665 start.go:901] validating driver "kvm2" against <nil>
	I0916 10:23:45.267637   12665 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:23:45.268325   12665 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:23:45.268388   12665 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3871/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:23:45.283028   12665 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:23:45.283081   12665 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:23:45.283321   12665 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:45.283357   12665 cni.go:84] Creating CNI manager for ""
	I0916 10:23:45.283399   12665 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:23:45.283421   12665 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:23:45.283478   12665 start.go:340] cluster config:
	{Name:addons-855148 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-855148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:45.283604   12665 iso.go:125] acquiring lock: {Name:mk549d8744cb1b2697cd1f4f389577317a4ca0fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:23:45.285145   12665 out.go:177] * Starting "addons-855148" primary control-plane node in "addons-855148" cluster
	I0916 10:23:45.286375   12665 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:23:45.286405   12665 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0916 10:23:45.286415   12665 cache.go:56] Caching tarball of preloaded images
	I0916 10:23:45.286490   12665 preload.go:172] Found /home/jenkins/minikube-integration/19651-3871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:23:45.286501   12665 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:23:45.286792   12665 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/config.json ...
	I0916 10:23:45.286810   12665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/config.json: {Name:mk9f08570f5c42cf05fd371411ad7bd4b3be1e1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:45.286934   12665 start.go:360] acquireMachinesLock for addons-855148: {Name:mk9fe4787b475ac57532de058c62adb66a1834b7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:23:45.286977   12665 start.go:364] duration metric: took 30.89µs to acquireMachinesLock for "addons-855148"
	I0916 10:23:45.286994   12665 start.go:93] Provisioning new machine with config: &{Name:addons-855148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:addons-855148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:23:45.287047   12665 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 10:23:45.288716   12665 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0916 10:23:45.288839   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:23:45.288876   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:23:45.303446   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34917
	I0916 10:23:45.303842   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:23:45.304311   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:23:45.304333   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:23:45.304686   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:23:45.304869   12665 main.go:141] libmachine: (addons-855148) Calling .GetMachineName
	I0916 10:23:45.305001   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:23:45.305177   12665 start.go:159] libmachine.API.Create for "addons-855148" (driver="kvm2")
	I0916 10:23:45.305201   12665 client.go:168] LocalClient.Create starting
	I0916 10:23:45.305234   12665 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19651-3871/.minikube/certs/ca.pem
	I0916 10:23:45.400403   12665 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19651-3871/.minikube/certs/cert.pem
	I0916 10:23:45.536532   12665 main.go:141] libmachine: Running pre-create checks...
	I0916 10:23:45.536557   12665 main.go:141] libmachine: (addons-855148) Calling .PreCreateCheck
	I0916 10:23:45.537031   12665 main.go:141] libmachine: (addons-855148) Calling .GetConfigRaw
	I0916 10:23:45.537424   12665 main.go:141] libmachine: Creating machine...
	I0916 10:23:45.537437   12665 main.go:141] libmachine: (addons-855148) Calling .Create
	I0916 10:23:45.537569   12665 main.go:141] libmachine: (addons-855148) Creating KVM machine...
	I0916 10:23:45.538948   12665 main.go:141] libmachine: (addons-855148) DBG | found existing default KVM network
	I0916 10:23:45.539839   12665 main.go:141] libmachine: (addons-855148) DBG | I0916 10:23:45.539687   12686 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000121a60}
	I0916 10:23:45.539869   12665 main.go:141] libmachine: (addons-855148) DBG | created network xml: 
	I0916 10:23:45.539879   12665 main.go:141] libmachine: (addons-855148) DBG | <network>
	I0916 10:23:45.539886   12665 main.go:141] libmachine: (addons-855148) DBG |   <name>mk-addons-855148</name>
	I0916 10:23:45.539890   12665 main.go:141] libmachine: (addons-855148) DBG |   <dns enable='no'/>
	I0916 10:23:45.539896   12665 main.go:141] libmachine: (addons-855148) DBG |   
	I0916 10:23:45.539901   12665 main.go:141] libmachine: (addons-855148) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 10:23:45.539909   12665 main.go:141] libmachine: (addons-855148) DBG |     <dhcp>
	I0916 10:23:45.539915   12665 main.go:141] libmachine: (addons-855148) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 10:23:45.539926   12665 main.go:141] libmachine: (addons-855148) DBG |     </dhcp>
	I0916 10:23:45.539932   12665 main.go:141] libmachine: (addons-855148) DBG |   </ip>
	I0916 10:23:45.539937   12665 main.go:141] libmachine: (addons-855148) DBG |   
	I0916 10:23:45.539943   12665 main.go:141] libmachine: (addons-855148) DBG | </network>
	I0916 10:23:45.539949   12665 main.go:141] libmachine: (addons-855148) DBG | 
	I0916 10:23:45.545297   12665 main.go:141] libmachine: (addons-855148) DBG | trying to create private KVM network mk-addons-855148 192.168.39.0/24...
	I0916 10:23:45.609848   12665 main.go:141] libmachine: (addons-855148) DBG | private KVM network mk-addons-855148 192.168.39.0/24 created
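The subnet parameters logged above (gateway .1, DHCP clients .2 through .254, broadcast .255) follow directly from the chosen /24 CIDR. A minimal sketch of that derivation using Python's stdlib `ipaddress` module; the helper name is illustrative, not minikube's actual `network.go` code:

```python
import ipaddress

def subnet_params(cidr: str) -> dict:
    """Derive gateway, DHCP client range, and broadcast for a private subnet."""
    net = ipaddress.ip_network(cidr)
    hosts = list(net.hosts())            # usable addresses: .1 through .254 for a /24
    return {
        "gateway": str(hosts[0]),        # first usable address
        "client_min": str(hosts[1]),     # first address handed out to clients
        "client_max": str(hosts[-1]),    # last usable address
        "broadcast": str(net.broadcast_address),
    }

params = subnet_params("192.168.39.0/24")
```

Note that the generated network XML above ends its DHCP range at .253 even though the last usable client address is .254.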
	I0916 10:23:45.609900   12665 main.go:141] libmachine: (addons-855148) DBG | I0916 10:23:45.609798   12686 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19651-3871/.minikube
	I0916 10:23:45.609920   12665 main.go:141] libmachine: (addons-855148) Setting up store path in /home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148 ...
	I0916 10:23:45.609939   12665 main.go:141] libmachine: (addons-855148) Building disk image from file:///home/jenkins/minikube-integration/19651-3871/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 10:23:45.609957   12665 main.go:141] libmachine: (addons-855148) Downloading /home/jenkins/minikube-integration/19651-3871/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19651-3871/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 10:23:45.852916   12665 main.go:141] libmachine: (addons-855148) DBG | I0916 10:23:45.852789   12686 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa...
	I0916 10:23:45.996858   12665 main.go:141] libmachine: (addons-855148) DBG | I0916 10:23:45.996715   12686 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/addons-855148.rawdisk...
	I0916 10:23:45.996887   12665 main.go:141] libmachine: (addons-855148) DBG | Writing magic tar header
	I0916 10:23:45.996900   12665 main.go:141] libmachine: (addons-855148) DBG | Writing SSH key tar header
	I0916 10:23:45.996915   12665 main.go:141] libmachine: (addons-855148) DBG | I0916 10:23:45.996828   12686 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148 ...
	I0916 10:23:45.996930   12665 main.go:141] libmachine: (addons-855148) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148
	I0916 10:23:45.996940   12665 main.go:141] libmachine: (addons-855148) Setting executable bit set on /home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148 (perms=drwx------)
	I0916 10:23:45.996967   12665 main.go:141] libmachine: (addons-855148) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3871/.minikube/machines
	I0916 10:23:45.996993   12665 main.go:141] libmachine: (addons-855148) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3871/.minikube
	I0916 10:23:45.997009   12665 main.go:141] libmachine: (addons-855148) Setting executable bit set on /home/jenkins/minikube-integration/19651-3871/.minikube/machines (perms=drwxr-xr-x)
	I0916 10:23:45.997019   12665 main.go:141] libmachine: (addons-855148) Setting executable bit set on /home/jenkins/minikube-integration/19651-3871/.minikube (perms=drwxr-xr-x)
	I0916 10:23:45.997025   12665 main.go:141] libmachine: (addons-855148) Setting executable bit set on /home/jenkins/minikube-integration/19651-3871 (perms=drwxrwxr-x)
	I0916 10:23:45.997033   12665 main.go:141] libmachine: (addons-855148) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 10:23:45.997040   12665 main.go:141] libmachine: (addons-855148) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 10:23:45.997051   12665 main.go:141] libmachine: (addons-855148) Creating domain...
	I0916 10:23:45.997072   12665 main.go:141] libmachine: (addons-855148) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19651-3871
	I0916 10:23:45.997086   12665 main.go:141] libmachine: (addons-855148) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 10:23:45.997092   12665 main.go:141] libmachine: (addons-855148) DBG | Checking permissions on dir: /home/jenkins
	I0916 10:23:45.997097   12665 main.go:141] libmachine: (addons-855148) DBG | Checking permissions on dir: /home
	I0916 10:23:45.997104   12665 main.go:141] libmachine: (addons-855148) DBG | Skipping /home - not owner
	I0916 10:23:45.998073   12665 main.go:141] libmachine: (addons-855148) define libvirt domain using xml: 
	I0916 10:23:45.998101   12665 main.go:141] libmachine: (addons-855148) <domain type='kvm'>
	I0916 10:23:45.998110   12665 main.go:141] libmachine: (addons-855148)   <name>addons-855148</name>
	I0916 10:23:45.998124   12665 main.go:141] libmachine: (addons-855148)   <memory unit='MiB'>4000</memory>
	I0916 10:23:45.998139   12665 main.go:141] libmachine: (addons-855148)   <vcpu>2</vcpu>
	I0916 10:23:45.998152   12665 main.go:141] libmachine: (addons-855148)   <features>
	I0916 10:23:45.998162   12665 main.go:141] libmachine: (addons-855148)     <acpi/>
	I0916 10:23:45.998169   12665 main.go:141] libmachine: (addons-855148)     <apic/>
	I0916 10:23:45.998180   12665 main.go:141] libmachine: (addons-855148)     <pae/>
	I0916 10:23:45.998188   12665 main.go:141] libmachine: (addons-855148)     
	I0916 10:23:45.998195   12665 main.go:141] libmachine: (addons-855148)   </features>
	I0916 10:23:45.998200   12665 main.go:141] libmachine: (addons-855148)   <cpu mode='host-passthrough'>
	I0916 10:23:45.998207   12665 main.go:141] libmachine: (addons-855148)   
	I0916 10:23:45.998217   12665 main.go:141] libmachine: (addons-855148)   </cpu>
	I0916 10:23:45.998235   12665 main.go:141] libmachine: (addons-855148)   <os>
	I0916 10:23:45.998248   12665 main.go:141] libmachine: (addons-855148)     <type>hvm</type>
	I0916 10:23:45.998260   12665 main.go:141] libmachine: (addons-855148)     <boot dev='cdrom'/>
	I0916 10:23:45.998269   12665 main.go:141] libmachine: (addons-855148)     <boot dev='hd'/>
	I0916 10:23:45.998279   12665 main.go:141] libmachine: (addons-855148)     <bootmenu enable='no'/>
	I0916 10:23:45.998286   12665 main.go:141] libmachine: (addons-855148)   </os>
	I0916 10:23:45.998291   12665 main.go:141] libmachine: (addons-855148)   <devices>
	I0916 10:23:45.998299   12665 main.go:141] libmachine: (addons-855148)     <disk type='file' device='cdrom'>
	I0916 10:23:45.998310   12665 main.go:141] libmachine: (addons-855148)       <source file='/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/boot2docker.iso'/>
	I0916 10:23:45.998319   12665 main.go:141] libmachine: (addons-855148)       <target dev='hdc' bus='scsi'/>
	I0916 10:23:45.998329   12665 main.go:141] libmachine: (addons-855148)       <readonly/>
	I0916 10:23:45.998337   12665 main.go:141] libmachine: (addons-855148)     </disk>
	I0916 10:23:45.998348   12665 main.go:141] libmachine: (addons-855148)     <disk type='file' device='disk'>
	I0916 10:23:45.998373   12665 main.go:141] libmachine: (addons-855148)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 10:23:45.998384   12665 main.go:141] libmachine: (addons-855148)       <source file='/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/addons-855148.rawdisk'/>
	I0916 10:23:45.998395   12665 main.go:141] libmachine: (addons-855148)       <target dev='hda' bus='virtio'/>
	I0916 10:23:45.998407   12665 main.go:141] libmachine: (addons-855148)     </disk>
	I0916 10:23:45.998429   12665 main.go:141] libmachine: (addons-855148)     <interface type='network'>
	I0916 10:23:45.998440   12665 main.go:141] libmachine: (addons-855148)       <source network='mk-addons-855148'/>
	I0916 10:23:45.998451   12665 main.go:141] libmachine: (addons-855148)       <model type='virtio'/>
	I0916 10:23:45.998460   12665 main.go:141] libmachine: (addons-855148)     </interface>
	I0916 10:23:45.998468   12665 main.go:141] libmachine: (addons-855148)     <interface type='network'>
	I0916 10:23:45.998481   12665 main.go:141] libmachine: (addons-855148)       <source network='default'/>
	I0916 10:23:45.998493   12665 main.go:141] libmachine: (addons-855148)       <model type='virtio'/>
	I0916 10:23:45.998502   12665 main.go:141] libmachine: (addons-855148)     </interface>
	I0916 10:23:45.998513   12665 main.go:141] libmachine: (addons-855148)     <serial type='pty'>
	I0916 10:23:45.998522   12665 main.go:141] libmachine: (addons-855148)       <target port='0'/>
	I0916 10:23:45.998532   12665 main.go:141] libmachine: (addons-855148)     </serial>
	I0916 10:23:45.998542   12665 main.go:141] libmachine: (addons-855148)     <console type='pty'>
	I0916 10:23:45.998555   12665 main.go:141] libmachine: (addons-855148)       <target type='serial' port='0'/>
	I0916 10:23:45.998577   12665 main.go:141] libmachine: (addons-855148)     </console>
	I0916 10:23:45.998594   12665 main.go:141] libmachine: (addons-855148)     <rng model='virtio'>
	I0916 10:23:45.998608   12665 main.go:141] libmachine: (addons-855148)       <backend model='random'>/dev/random</backend>
	I0916 10:23:45.998618   12665 main.go:141] libmachine: (addons-855148)     </rng>
	I0916 10:23:45.998639   12665 main.go:141] libmachine: (addons-855148)     
	I0916 10:23:45.998648   12665 main.go:141] libmachine: (addons-855148)     
	I0916 10:23:45.998656   12665 main.go:141] libmachine: (addons-855148)   </devices>
	I0916 10:23:45.998670   12665 main.go:141] libmachine: (addons-855148) </domain>
	I0916 10:23:45.998694   12665 main.go:141] libmachine: (addons-855148) 
	I0916 10:23:46.004329   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:dc:a8:17 in network default
	I0916 10:23:46.004879   12665 main.go:141] libmachine: (addons-855148) Ensuring networks are active...
	I0916 10:23:46.004899   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:23:46.005510   12665 main.go:141] libmachine: (addons-855148) Ensuring network default is active
	I0916 10:23:46.005741   12665 main.go:141] libmachine: (addons-855148) Ensuring network mk-addons-855148 is active
	I0916 10:23:46.006214   12665 main.go:141] libmachine: (addons-855148) Getting domain xml...
	I0916 10:23:46.006862   12665 main.go:141] libmachine: (addons-855148) Creating domain...
	I0916 10:23:47.379756   12665 main.go:141] libmachine: (addons-855148) Waiting to get IP...
	I0916 10:23:47.380418   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:23:47.380848   12665 main.go:141] libmachine: (addons-855148) DBG | unable to find current IP address of domain addons-855148 in network mk-addons-855148
	I0916 10:23:47.380887   12665 main.go:141] libmachine: (addons-855148) DBG | I0916 10:23:47.380796   12686 retry.go:31] will retry after 265.723318ms: waiting for machine to come up
	I0916 10:23:47.649531   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:23:47.649920   12665 main.go:141] libmachine: (addons-855148) DBG | unable to find current IP address of domain addons-855148 in network mk-addons-855148
	I0916 10:23:47.649948   12665 main.go:141] libmachine: (addons-855148) DBG | I0916 10:23:47.649875   12686 retry.go:31] will retry after 254.267055ms: waiting for machine to come up
	I0916 10:23:47.905231   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:23:47.905584   12665 main.go:141] libmachine: (addons-855148) DBG | unable to find current IP address of domain addons-855148 in network mk-addons-855148
	I0916 10:23:47.905609   12665 main.go:141] libmachine: (addons-855148) DBG | I0916 10:23:47.905535   12686 retry.go:31] will retry after 333.900254ms: waiting for machine to come up
	I0916 10:23:48.240973   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:23:48.241402   12665 main.go:141] libmachine: (addons-855148) DBG | unable to find current IP address of domain addons-855148 in network mk-addons-855148
	I0916 10:23:48.241431   12665 main.go:141] libmachine: (addons-855148) DBG | I0916 10:23:48.241355   12686 retry.go:31] will retry after 579.041579ms: waiting for machine to come up
	I0916 10:23:48.821731   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:23:48.822044   12665 main.go:141] libmachine: (addons-855148) DBG | unable to find current IP address of domain addons-855148 in network mk-addons-855148
	I0916 10:23:48.822080   12665 main.go:141] libmachine: (addons-855148) DBG | I0916 10:23:48.822003   12686 retry.go:31] will retry after 530.451216ms: waiting for machine to come up
	I0916 10:23:49.353641   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:23:49.354036   12665 main.go:141] libmachine: (addons-855148) DBG | unable to find current IP address of domain addons-855148 in network mk-addons-855148
	I0916 10:23:49.354092   12665 main.go:141] libmachine: (addons-855148) DBG | I0916 10:23:49.354009   12686 retry.go:31] will retry after 766.228349ms: waiting for machine to come up
	I0916 10:23:50.121886   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:23:50.122282   12665 main.go:141] libmachine: (addons-855148) DBG | unable to find current IP address of domain addons-855148 in network mk-addons-855148
	I0916 10:23:50.122309   12665 main.go:141] libmachine: (addons-855148) DBG | I0916 10:23:50.122251   12686 retry.go:31] will retry after 937.966739ms: waiting for machine to come up
	I0916 10:23:51.061436   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:23:51.061886   12665 main.go:141] libmachine: (addons-855148) DBG | unable to find current IP address of domain addons-855148 in network mk-addons-855148
	I0916 10:23:51.061908   12665 main.go:141] libmachine: (addons-855148) DBG | I0916 10:23:51.061836   12686 retry.go:31] will retry after 1.333385808s: waiting for machine to come up
	I0916 10:23:52.397040   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:23:52.397493   12665 main.go:141] libmachine: (addons-855148) DBG | unable to find current IP address of domain addons-855148 in network mk-addons-855148
	I0916 10:23:52.397516   12665 main.go:141] libmachine: (addons-855148) DBG | I0916 10:23:52.397478   12686 retry.go:31] will retry after 1.256288698s: waiting for machine to come up
	I0916 10:23:53.655804   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:23:53.656141   12665 main.go:141] libmachine: (addons-855148) DBG | unable to find current IP address of domain addons-855148 in network mk-addons-855148
	I0916 10:23:53.656165   12665 main.go:141] libmachine: (addons-855148) DBG | I0916 10:23:53.656084   12686 retry.go:31] will retry after 1.966569761s: waiting for machine to come up
	I0916 10:23:55.624264   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:23:55.624726   12665 main.go:141] libmachine: (addons-855148) DBG | unable to find current IP address of domain addons-855148 in network mk-addons-855148
	I0916 10:23:55.624756   12665 main.go:141] libmachine: (addons-855148) DBG | I0916 10:23:55.624669   12686 retry.go:31] will retry after 1.74916523s: waiting for machine to come up
	I0916 10:23:57.376179   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:23:57.376631   12665 main.go:141] libmachine: (addons-855148) DBG | unable to find current IP address of domain addons-855148 in network mk-addons-855148
	I0916 10:23:57.376654   12665 main.go:141] libmachine: (addons-855148) DBG | I0916 10:23:57.376562   12686 retry.go:31] will retry after 2.306177582s: waiting for machine to come up
	I0916 10:23:59.686105   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:23:59.686555   12665 main.go:141] libmachine: (addons-855148) DBG | unable to find current IP address of domain addons-855148 in network mk-addons-855148
	I0916 10:23:59.686580   12665 main.go:141] libmachine: (addons-855148) DBG | I0916 10:23:59.686514   12686 retry.go:31] will retry after 3.702190597s: waiting for machine to come up
	I0916 10:24:03.392260   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:03.392579   12665 main.go:141] libmachine: (addons-855148) DBG | unable to find current IP address of domain addons-855148 in network mk-addons-855148
	I0916 10:24:03.392600   12665 main.go:141] libmachine: (addons-855148) DBG | I0916 10:24:03.392516   12686 retry.go:31] will retry after 4.887937872s: waiting for machine to come up
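The `retry.go:31` waits above grow roughly geometrically (265ms, 333ms, 579ms, ... 4.8s) with random jitter until the domain reports an IP address. A rough Python sketch of that poll pattern; the initial wait, growth factor, and jitter range are guesses from the logged values, not minikube's actual retry implementation:

```python
import random
import time

def backoff_waits(initial: float = 0.25, factor: float = 1.6,
                  jitter: float = 0.5, max_tries: int = 15) -> list:
    """Generate jittered, geometrically growing wait times."""
    waits, wait = [], initial
    for _ in range(max_tries):
        # jitter spreads retries so repeated runs don't poll in lockstep
        waits.append(wait * (1 + random.uniform(-jitter, jitter)))
        wait *= factor
    return waits

def wait_for(check, waits) -> bool:
    """Poll check() with the given waits; True as soon as it succeeds."""
    for w in waits:
        if check():
            return True
        time.sleep(w)
    return check()
```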
	I0916 10:24:08.281522   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:08.281808   12665 main.go:141] libmachine: (addons-855148) Found IP for machine: 192.168.39.55
	I0916 10:24:08.281828   12665 main.go:141] libmachine: (addons-855148) Reserving static IP address...
	I0916 10:24:08.281840   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has current primary IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:08.282153   12665 main.go:141] libmachine: (addons-855148) DBG | unable to find host DHCP lease matching {name: "addons-855148", mac: "52:54:00:5b:32:be", ip: "192.168.39.55"} in network mk-addons-855148
	I0916 10:24:08.407642   12665 main.go:141] libmachine: (addons-855148) DBG | Getting to WaitForSSH function...
	I0916 10:24:08.407733   12665 main.go:141] libmachine: (addons-855148) Reserved static IP address: 192.168.39.55
	I0916 10:24:08.407751   12665 main.go:141] libmachine: (addons-855148) Waiting for SSH to be available...
	I0916 10:24:08.410186   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:08.410600   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:08.410630   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:08.410821   12665 main.go:141] libmachine: (addons-855148) DBG | Using SSH client type: external
	I0916 10:24:08.410851   12665 main.go:141] libmachine: (addons-855148) DBG | Using SSH private key: /home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa (-rw-------)
	I0916 10:24:08.410884   12665 main.go:141] libmachine: (addons-855148) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 10:24:08.410902   12665 main.go:141] libmachine: (addons-855148) DBG | About to run SSH command:
	I0916 10:24:08.410916   12665 main.go:141] libmachine: (addons-855148) DBG | exit 0
	I0916 10:24:08.547042   12665 main.go:141] libmachine: (addons-855148) DBG | SSH cmd err, output: <nil>: 
	I0916 10:24:08.547256   12665 main.go:141] libmachine: (addons-855148) KVM machine creation complete!
	I0916 10:24:08.547580   12665 main.go:141] libmachine: (addons-855148) Calling .GetConfigRaw
	I0916 10:24:08.561291   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:08.561507   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:08.561668   12665 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 10:24:08.561690   12665 main.go:141] libmachine: (addons-855148) Calling .GetState
	I0916 10:24:08.563108   12665 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 10:24:08.563123   12665 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 10:24:08.563130   12665 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 10:24:08.563138   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:08.565131   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:08.565456   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:08.565474   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:08.565618   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:08.565799   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:08.565936   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:08.566066   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:08.566207   12665 main.go:141] libmachine: Using SSH client type: native
	I0916 10:24:08.566421   12665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0916 10:24:08.566433   12665 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 10:24:08.677942   12665 main.go:141] libmachine: SSH cmd err, output: <nil>: 
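Both WaitForSSH passes above reduce to running `exit 0` over SSH until it returns status 0, proving the server accepts connections. A generic poller sketch with the command runner injected so the example stays self-contained; minikube's real code goes through its own SSH client, not this hypothetical `run` callable:

```python
import time
from typing import Callable

def wait_for_ssh(run: Callable[[str], int], timeout: float = 60.0,
                 interval: float = 1.0) -> bool:
    """Run `exit 0` through `run` until it succeeds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if run("exit 0") == 0:   # exit status 0: SSH server accepted the session
            return True
        time.sleep(interval)
    return False
```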
	I0916 10:24:08.677966   12665 main.go:141] libmachine: Detecting the provisioner...
	I0916 10:24:08.677976   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:08.680485   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:08.680865   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:08.680886   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:08.681197   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:08.681366   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:08.681517   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:08.681652   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:08.681776   12665 main.go:141] libmachine: Using SSH client type: native
	I0916 10:24:08.681961   12665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0916 10:24:08.681974   12665 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 10:24:08.791081   12665 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 10:24:08.791183   12665 main.go:141] libmachine: found compatible host: buildroot
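Provisioner detection above amounts to parsing the `cat /etc/os-release` output and matching its `ID` field. A small sketch of that parse, fed the Buildroot-style output shown in the log:

```python
def parse_os_release(text: str) -> dict:
    """Parse /etc/os-release KEY=VALUE lines, stripping optional quotes."""
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        fields[key] = value.strip('"')
    return fields

os_release = """NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
"""
info = parse_os_release(os_release)   # info["ID"] == "buildroot"
```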
	I0916 10:24:08.791205   12665 main.go:141] libmachine: Provisioning with buildroot...
	I0916 10:24:08.791217   12665 main.go:141] libmachine: (addons-855148) Calling .GetMachineName
	I0916 10:24:08.791453   12665 buildroot.go:166] provisioning hostname "addons-855148"
	I0916 10:24:08.791475   12665 main.go:141] libmachine: (addons-855148) Calling .GetMachineName
	I0916 10:24:08.791635   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:08.794072   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:08.794395   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:08.794434   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:08.794522   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:08.794698   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:08.794826   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:08.794941   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:08.795086   12665 main.go:141] libmachine: Using SSH client type: native
	I0916 10:24:08.795269   12665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0916 10:24:08.795281   12665 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-855148 && echo "addons-855148" | sudo tee /etc/hostname
	I0916 10:24:08.919679   12665 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-855148
	
	I0916 10:24:08.919716   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:08.922502   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:08.922847   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:08.922878   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:08.923066   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:08.923214   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:08.923372   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:08.923570   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:08.923751   12665 main.go:141] libmachine: Using SSH client type: native
	I0916 10:24:08.923951   12665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0916 10:24:08.923972   12665 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-855148' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-855148/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-855148' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:24:09.042735   12665 main.go:141] libmachine: SSH cmd err, output: <nil>: 
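The shell snippet above is idempotent: skip if any `/etc/hosts` line already ends with the hostname, otherwise rewrite an existing `127.0.1.1` entry in place, otherwise append one. The same logic as a pure function over the file's contents, for illustration only (the real provisioner runs the shell shown above):

```python
import re

def ensure_hostname(hosts: str, name: str) -> str:
    """Ensure /etc/hosts content maps 127.0.1.1 to `name`, mirroring the shell above."""
    if re.search(r"\s" + re.escape(name) + r"$", hosts, flags=re.M):
        return hosts                              # hostname already present
    if re.search(r"^127\.0\.1\.1\s", hosts, flags=re.M):
        # rewrite the existing 127.0.1.1 entry in place
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}", hosts, flags=re.M)
    return hosts + f"127.0.1.1 {name}\n"          # otherwise append a new entry
```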
	I0916 10:24:09.042764   12665 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3871/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3871/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3871/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3871/.minikube}
	I0916 10:24:09.042783   12665 buildroot.go:174] setting up certificates
	I0916 10:24:09.042792   12665 provision.go:84] configureAuth start
	I0916 10:24:09.042800   12665 main.go:141] libmachine: (addons-855148) Calling .GetMachineName
	I0916 10:24:09.043074   12665 main.go:141] libmachine: (addons-855148) Calling .GetIP
	I0916 10:24:09.045559   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:09.045877   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:09.045904   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:09.046018   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:09.047986   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:09.048325   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:09.048353   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:09.048434   12665 provision.go:143] copyHostCerts
	I0916 10:24:09.048500   12665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3871/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3871/.minikube/ca.pem (1078 bytes)
	I0916 10:24:09.048626   12665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3871/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3871/.minikube/cert.pem (1123 bytes)
	I0916 10:24:09.048715   12665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3871/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3871/.minikube/key.pem (1679 bytes)
	I0916 10:24:09.048801   12665 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3871/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3871/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3871/.minikube/certs/ca-key.pem org=jenkins.addons-855148 san=[127.0.0.1 192.168.39.55 addons-855148 localhost minikube]
	I0916 10:24:09.342418   12665 provision.go:177] copyRemoteCerts
	I0916 10:24:09.342477   12665 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:24:09.342497   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:09.345029   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:09.345371   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:09.345403   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:09.345534   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:09.345733   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:09.345841   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:09.345988   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:09.432887   12665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3871/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:24:09.453983   12665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3871/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:24:09.475156   12665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3871/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:24:09.495955   12665 provision.go:87] duration metric: took 453.153407ms to configureAuth
	I0916 10:24:09.495978   12665 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:24:09.496150   12665 config.go:182] Loaded profile config "addons-855148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:24:09.496173   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:09.496411   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:09.498779   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:09.499105   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:09.499127   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:09.499228   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:09.499421   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:09.499545   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:09.499685   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:09.499816   12665 main.go:141] libmachine: Using SSH client type: native
	I0916 10:24:09.499965   12665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0916 10:24:09.499975   12665 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 10:24:09.611938   12665 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0916 10:24:09.611961   12665 buildroot.go:70] root file system type: tmpfs
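The `df --output=fstype / | tail -n 1` command run over SSH above is how the provisioner detects the guest's root filesystem type. A standalone version of that probe (assumes GNU coreutils `df`, as on the Buildroot guest):

```shell
# Probe the root filesystem type the way minikube's provisioner does.
# --output=fstype is GNU coreutils specific; BSD df has no equivalent flag.
fstype=$(df --output=fstype / | tail -n 1)
echo "root fs: $fstype"   # reported as "tmpfs" on the guest in this log
```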
	I0916 10:24:09.612055   12665 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 10:24:09.612071   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:09.614627   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:09.614947   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:09.614973   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:09.615192   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:09.615390   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:09.615564   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:09.615678   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:09.615812   12665 main.go:141] libmachine: Using SSH client type: native
	I0916 10:24:09.615972   12665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0916 10:24:09.616031   12665 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 10:24:09.741133   12665 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 10:24:09.741169   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:09.743836   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:09.744197   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:09.744224   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:09.744401   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:09.744596   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:09.744742   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:09.744858   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:09.744992   12665 main.go:141] libmachine: Using SSH client type: native
	I0916 10:24:09.745169   12665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0916 10:24:09.745206   12665 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 10:24:11.480451   12665 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
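The `sudo diff -u ... || { sudo mv ...; ... restart docker; }` command issued above is an update-only-if-changed idiom: the service is replaced and restarted only when the candidate unit differs from the installed one (or, as in this log, when the installed one does not exist yet). A minimal sketch of the pattern using scratch files in place of the real unit paths:

```shell
# Replace a file with its candidate only when the contents differ.
# In minikube the successful branch also runs daemon-reload + restart.
unit=$(mktemp) candidate=$(mktemp)
printf 'old contents\n' > "$unit"
printf 'new contents\n' > "$candidate"
diff -u "$unit" "$candidate" > /dev/null || {
  mv "$candidate" "$unit"   # stand-in for the systemctl steps in the log
  echo "unit updated"
}
```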
	
	I0916 10:24:11.480492   12665 main.go:141] libmachine: Checking connection to Docker...
	I0916 10:24:11.480502   12665 main.go:141] libmachine: (addons-855148) Calling .GetURL
	I0916 10:24:11.481679   12665 main.go:141] libmachine: (addons-855148) DBG | Using libvirt version 6000000
	I0916 10:24:11.483742   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:11.484114   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:11.484142   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:11.484299   12665 main.go:141] libmachine: Docker is up and running!
	I0916 10:24:11.484324   12665 main.go:141] libmachine: Reticulating splines...
	I0916 10:24:11.484333   12665 client.go:171] duration metric: took 26.179122338s to LocalClient.Create
	I0916 10:24:11.484362   12665 start.go:167] duration metric: took 26.179184724s to libmachine.API.Create "addons-855148"
	I0916 10:24:11.484389   12665 start.go:293] postStartSetup for "addons-855148" (driver="kvm2")
	I0916 10:24:11.484405   12665 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:24:11.484428   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:11.484646   12665 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:24:11.484672   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:11.486747   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:11.487071   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:11.487098   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:11.487211   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:11.487384   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:11.487526   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:11.487645   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:11.572944   12665 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:24:11.576542   12665 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:24:11.576567   12665 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3871/.minikube/addons for local assets ...
	I0916 10:24:11.576655   12665 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3871/.minikube/files for local assets ...
	I0916 10:24:11.576681   12665 start.go:296] duration metric: took 92.2829ms for postStartSetup
	I0916 10:24:11.576717   12665 main.go:141] libmachine: (addons-855148) Calling .GetConfigRaw
	I0916 10:24:11.577292   12665 main.go:141] libmachine: (addons-855148) Calling .GetIP
	I0916 10:24:11.579628   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:11.579882   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:11.579902   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:11.580120   12665 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/config.json ...
	I0916 10:24:11.580280   12665 start.go:128] duration metric: took 26.293224808s to createHost
	I0916 10:24:11.580301   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:11.582468   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:11.582770   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:11.582789   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:11.582914   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:11.583101   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:11.583260   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:11.583399   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:11.583554   12665 main.go:141] libmachine: Using SSH client type: native
	I0916 10:24:11.583726   12665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0916 10:24:11.583738   12665 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:24:11.695766   12665 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726482251.675599485
	
	I0916 10:24:11.695791   12665 fix.go:216] guest clock: 1726482251.675599485
	I0916 10:24:11.695798   12665 fix.go:229] Guest: 2024-09-16 10:24:11.675599485 +0000 UTC Remote: 2024-09-16 10:24:11.580290945 +0000 UTC m=+26.391214346 (delta=95.30854ms)
	I0916 10:24:11.695832   12665 fix.go:200] guest clock delta is within tolerance: 95.30854ms
	I0916 10:24:11.695842   12665 start.go:83] releasing machines lock for "addons-855148", held for 26.408854797s
	I0916 10:24:11.695864   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:11.696110   12665 main.go:141] libmachine: (addons-855148) Calling .GetIP
	I0916 10:24:11.698534   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:11.698900   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:11.698930   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:11.699052   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:11.699604   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:11.699791   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:11.699870   12665 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:24:11.699934   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:11.699992   12665 ssh_runner.go:195] Run: cat /version.json
	I0916 10:24:11.700014   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:11.702430   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:11.702748   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:11.702772   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:11.702792   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:11.702919   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:11.703062   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:11.703175   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:11.703189   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:11.703198   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:11.703380   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:11.703425   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:11.703511   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:11.703649   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:11.703802   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:11.807344   12665 ssh_runner.go:195] Run: systemctl --version
	I0916 10:24:11.812847   12665 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:24:11.817890   12665 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:24:11.817951   12665 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:24:11.832781   12665 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 10:24:11.832802   12665 start.go:495] detecting cgroup driver to use...
	I0916 10:24:11.832899   12665 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:24:11.850881   12665 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:24:11.861539   12665 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:24:11.872368   12665 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:24:11.872425   12665 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:24:11.883373   12665 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:24:11.894383   12665 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:24:11.905023   12665 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:24:11.915797   12665 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:24:11.926632   12665 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:24:11.937228   12665 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:24:11.948086   12665 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
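The run of `sed -i -r` commands above rewrites /etc/containerd/config.toml in place to select the "cgroupfs" cgroup driver. One of those edits, applied to a scratch copy instead of the real config (GNU sed assumed; BSD sed needs `-i ''`):

```shell
# Force SystemdCgroup=false, mirroring minikube's cgroupfs configuration
# step, but against a temp file rather than /etc/containerd/config.toml.
toml=$(mktemp)
cat > "$toml" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$toml"
grep 'SystemdCgroup' "$toml"   # prints:   SystemdCgroup = false
```

The `\1` back-reference preserves the original indentation, so the edit is safe to run repeatedly.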
	I0916 10:24:11.958919   12665 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:24:11.967257   12665 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:24:11.975576   12665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:24:12.085681   12665 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 10:24:12.109881   12665 start.go:495] detecting cgroup driver to use...
	I0916 10:24:12.109969   12665 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 10:24:12.130374   12665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:24:12.144689   12665 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:24:12.160796   12665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:24:12.173119   12665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:24:12.187382   12665 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:24:12.218122   12665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:24:12.231560   12665 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
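The `printf %s "..." | sudo tee /etc/crictl.yaml` command above points crictl at the cri-dockerd socket (replacing the containerd endpoint written earlier in the log). The same write, targeting a temp path instead of /etc/crictl.yaml:

```shell
# Write crictl's runtime-endpoint config as in the log, to a scratch file.
cfg=$(mktemp)
printf %s 'runtime-endpoint: unix:///var/run/cri-dockerd.sock
' | tee "$cfg" > /dev/null
grep -q 'cri-dockerd.sock' "$cfg" && echo "crictl config written"
```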
	I0916 10:24:12.248072   12665 ssh_runner.go:195] Run: which cri-dockerd
	I0916 10:24:12.251258   12665 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 10:24:12.259635   12665 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 10:24:12.274344   12665 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 10:24:12.388895   12665 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 10:24:12.508049   12665 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 10:24:12.508184   12665 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0916 10:24:12.523923   12665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:24:12.634336   12665 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 10:24:14.949617   12665 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.315243325s)
	I0916 10:24:14.949717   12665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 10:24:14.962588   12665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:24:14.975467   12665 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 10:24:15.086733   12665 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 10:24:15.202636   12665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:24:15.312577   12665 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 10:24:15.328283   12665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:24:15.341252   12665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:24:15.459419   12665 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 10:24:15.534808   12665 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 10:24:15.534891   12665 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 10:24:15.540129   12665 start.go:563] Will wait 60s for crictl version
	I0916 10:24:15.540189   12665 ssh_runner.go:195] Run: which crictl
	I0916 10:24:15.543742   12665 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:24:15.578493   12665 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 10:24:15.578593   12665 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 10:24:15.607098   12665 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 10:24:15.631369   12665 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 10:24:15.631420   12665 main.go:141] libmachine: (addons-855148) Calling .GetIP
	I0916 10:24:15.634176   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:15.634602   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:15.634624   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:15.634825   12665 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 10:24:15.638338   12665 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:24:15.649697   12665 kubeadm.go:883] updating cluster {Name:addons-855148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-855148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:24:15.649803   12665 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:24:15.649873   12665 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 10:24:15.664931   12665 docker.go:685] Got preloaded images: 
	I0916 10:24:15.664949   12665 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0916 10:24:15.665003   12665 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 10:24:15.673969   12665 ssh_runner.go:195] Run: which lz4
	I0916 10:24:15.677245   12665 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 10:24:15.680729   12665 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 10:24:15.680754   12665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I0916 10:24:16.775118   12665 docker.go:649] duration metric: took 1.097906929s to copy over tarball
	I0916 10:24:16.775188   12665 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 10:24:18.550004   12665 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.774786164s)
	I0916 10:24:18.550036   12665 ssh_runner.go:146] rm: /preloaded.tar.lz4
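The preload step above copies a tarball to the guest, unpacks it into `/var` with `tar -C`, and deletes the tarball afterwards. A portable sketch of that pack/unpack-into-another-root pattern (the real command additionally pipes through `-I lz4` and preserves security xattrs, omitted here so the sketch runs anywhere):

```shell
# Stand-ins for the cache dir on the host and /var on the guest.
SRC=$(mktemp -d)
DST=$(mktemp -d)
TARBALL=$(mktemp)

mkdir -p "$SRC/lib/docker"
echo hello > "$SRC/lib/docker/layer.txt"

tar -C "$SRC" -cf "$TARBALL" .   # pack relative to SRC
tar -C "$DST" -xf "$TARBALL"     # unpack relative to DST, as with /var
rm "$TARBALL"                    # the log removes /preloaded.tar.lz4 too

cat "$DST/lib/docker/layer.txt"
```

The `-C` flag is what lets the same archive be relocated under a different root on the target machine.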
	I0916 10:24:18.588003   12665 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 10:24:18.597840   12665 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0916 10:24:18.613146   12665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:24:18.732028   12665 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 10:24:22.782514   12665 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.050440568s)
	I0916 10:24:22.782642   12665 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 10:24:22.797893   12665 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 10:24:22.797921   12665 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:24:22.797934   12665 kubeadm.go:934] updating node { 192.168.39.55 8443 v1.31.1 docker true true} ...
	I0916 10:24:22.798068   12665 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-855148 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-855148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:24:22.798135   12665 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 10:24:22.844009   12665 cni.go:84] Creating CNI manager for ""
	I0916 10:24:22.844036   12665 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:24:22.844046   12665 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:24:22.844064   12665 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.55 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-855148 NodeName:addons-855148 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:24:22.844192   12665 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-855148"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.55
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.55"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
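The config dumped above is a single multi-document YAML file: an `InitConfiguration`, a `ClusterConfiguration`, a `KubeletConfiguration`, and a `KubeProxyConfiguration`, separated by `---` and consumed together by `kubeadm init --config`. A quick sanity check of that shape before handing it to kubeadm (file path and contents abbreviated, for illustration only):

```shell
# Abbreviated stand-in for /var/tmp/minikube/kubeadm.yaml.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF

grep -c '^---$' "$CFG"   # 3 separators between the 4 documents
grep '^kind:' "$CFG"     # list the document kinds
```

Note that `kubeadm.k8s.io/v1beta3` is exactly the API version kubeadm warns about as deprecated later in this log, while the kubelet and kube-proxy documents use their own, independently versioned config groups.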
	I0916 10:24:22.844250   12665 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:24:22.855040   12665 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:24:22.855106   12665 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:24:22.865241   12665 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 10:24:22.882230   12665 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:24:22.898501   12665 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0916 10:24:22.915446   12665 ssh_runner.go:195] Run: grep 192.168.39.55	control-plane.minikube.internal$ /etc/hosts
	I0916 10:24:22.918903   12665 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.55	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:24:22.932744   12665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:24:23.050063   12665 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:24:23.068646   12665 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148 for IP: 192.168.39.55
	I0916 10:24:23.068690   12665 certs.go:194] generating shared ca certs ...
	I0916 10:24:23.068711   12665 certs.go:226] acquiring lock for ca certs: {Name:mk0c34d3e21299f24249eea5b0e6e60d4e03201c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:24:23.068857   12665 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3871/.minikube/ca.key
	I0916 10:24:23.121559   12665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3871/.minikube/ca.crt ...
	I0916 10:24:23.121587   12665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3871/.minikube/ca.crt: {Name:mk17f19e80104f0f9eb997a8b4a3da885e1a6cc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:24:23.121765   12665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3871/.minikube/ca.key ...
	I0916 10:24:23.121780   12665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3871/.minikube/ca.key: {Name:mkd36742c1d368e1449b733fff46cf7c528e21cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:24:23.121883   12665 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3871/.minikube/proxy-client-ca.key
	I0916 10:24:23.240500   12665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3871/.minikube/proxy-client-ca.crt ...
	I0916 10:24:23.240531   12665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3871/.minikube/proxy-client-ca.crt: {Name:mk0e7565c6f32797735f6eadbff287668b355d07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:24:23.240724   12665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3871/.minikube/proxy-client-ca.key ...
	I0916 10:24:23.240738   12665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3871/.minikube/proxy-client-ca.key: {Name:mk505efbddefa5cd51eb5c2b771928cf72158801 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:24:23.240828   12665 certs.go:256] generating profile certs ...
	I0916 10:24:23.240896   12665 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.key
	I0916 10:24:23.240924   12665 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt with IP's: []
	I0916 10:24:23.284088   12665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt ...
	I0916 10:24:23.284169   12665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: {Name:mk232ab992a87be400a9d959474897fc6efb5f4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:24:23.284342   12665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.key ...
	I0916 10:24:23.284361   12665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.key: {Name:mka4c5d5e9f86104c8c3c0b20fdd0591ee956342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:24:23.284452   12665 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/apiserver.key.788e7335
	I0916 10:24:23.284474   12665 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/apiserver.crt.788e7335 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.55]
	I0916 10:24:23.392717   12665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/apiserver.crt.788e7335 ...
	I0916 10:24:23.392747   12665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/apiserver.crt.788e7335: {Name:mk802f51822d9f63f98788e318bd707ec2db673e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:24:23.392920   12665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/apiserver.key.788e7335 ...
	I0916 10:24:23.392937   12665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/apiserver.key.788e7335: {Name:mk691937912088b42b71e71ea836d19664c9dac8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:24:23.393024   12665 certs.go:381] copying /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/apiserver.crt.788e7335 -> /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/apiserver.crt
	I0916 10:24:23.393130   12665 certs.go:385] copying /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/apiserver.key.788e7335 -> /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/apiserver.key
	I0916 10:24:23.393204   12665 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/proxy-client.key
	I0916 10:24:23.393227   12665 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/proxy-client.crt with IP's: []
	I0916 10:24:23.780183   12665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/proxy-client.crt ...
	I0916 10:24:23.780217   12665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/proxy-client.crt: {Name:mkc618ce4d192171fe5b7aca88f460cdb4816979 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:24:23.780385   12665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/proxy-client.key ...
	I0916 10:24:23.780397   12665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/proxy-client.key: {Name:mk3425ec210c881b481016f5fbf5096fa153e540 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:24:23.780552   12665 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3871/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:24:23.780606   12665 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3871/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:24:23.780632   12665 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3871/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:24:23.780655   12665 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3871/.minikube/certs/key.pem (1679 bytes)
	I0916 10:24:23.781202   12665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3871/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:24:23.807988   12665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3871/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 10:24:23.830338   12665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3871/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:24:23.852374   12665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3871/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 10:24:23.874333   12665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 10:24:23.895698   12665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:24:23.917279   12665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:24:23.938522   12665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:24:23.959535   12665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3871/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:24:23.980642   12665 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:24:23.995441   12665 ssh_runner.go:195] Run: openssl version
	I0916 10:24:24.000595   12665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:24:24.010214   12665 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:24:24.014216   12665 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:24:24.014276   12665 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:24:24.019438   12665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
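The two steps above are how OpenSSL-based tools are taught to trust minikubeCA: `openssl x509 -hash` computes the subject-name hash of the CA cert, and a symlink named `<hash>.0` (here `b5213941.0`) is created in `/etc/ssl/certs`, where OpenSSL looks certs up by hash. A sketch of the same mechanism with a throwaway self-signed CA (all names and paths hypothetical):

```shell
DIR=$(mktemp -d)

# Generate a throwaway self-signed CA certificate.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$DIR/ca.key" -out "$DIR/ca.pem" -days 1 2>/dev/null

# Compute the subject hash and create the <hash>.0 lookup symlink,
# mirroring what the log does for minikubeCA.pem in /etc/ssl/certs.
HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")
ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"

# The symlink resolves back to the certificate by hash.
openssl x509 -noout -subject -in "$DIR/$HASH.0"
```

The `.0` suffix is a collision counter: if two CAs hashed to the same value, the second would be linked as `<hash>.1`, and so on.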
	I0916 10:24:24.029271   12665 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:24:24.032828   12665 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:24:24.032880   12665 kubeadm.go:392] StartCluster: {Name:addons-855148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-855148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:24:24.033161   12665 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 10:24:24.048798   12665 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:24:24.058416   12665 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:24:24.067345   12665 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:24:24.076238   12665 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:24:24.076260   12665 kubeadm.go:157] found existing configuration files:
	
	I0916 10:24:24.076308   12665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:24:24.084445   12665 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:24:24.084525   12665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:24:24.093116   12665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:24:24.101502   12665 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:24:24.101546   12665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:24:24.109964   12665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:24:24.118055   12665 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:24:24.118102   12665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:24:24.126729   12665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:24:24.134952   12665 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:24:24.135009   12665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:24:24.143676   12665 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:24:24.187924   12665 kubeadm.go:310] W0916 10:24:24.170833    1505 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:24:24.188605   12665 kubeadm.go:310] W0916 10:24:24.171796    1505 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:24:24.283477   12665 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:24:34.624127   12665 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:24:34.624199   12665 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:24:34.624317   12665 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:24:34.624418   12665 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:24:34.624532   12665 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:24:34.624628   12665 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:24:34.626084   12665 out.go:235]   - Generating certificates and keys ...
	I0916 10:24:34.626166   12665 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:24:34.626221   12665 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:24:34.626312   12665 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:24:34.626376   12665 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:24:34.626441   12665 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:24:34.626492   12665 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:24:34.626536   12665 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:24:34.626644   12665 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-855148 localhost] and IPs [192.168.39.55 127.0.0.1 ::1]
	I0916 10:24:34.626694   12665 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:24:34.626827   12665 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-855148 localhost] and IPs [192.168.39.55 127.0.0.1 ::1]
	I0916 10:24:34.626899   12665 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:24:34.626954   12665 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:24:34.626992   12665 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:24:34.627039   12665 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:24:34.627103   12665 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:24:34.627182   12665 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:24:34.627262   12665 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:24:34.627345   12665 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:24:34.627434   12665 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:24:34.627541   12665 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:24:34.627638   12665 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:24:34.628902   12665 out.go:235]   - Booting up control plane ...
	I0916 10:24:34.628989   12665 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:24:34.629071   12665 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:24:34.629146   12665 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:24:34.629263   12665 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:24:34.629367   12665 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:24:34.629421   12665 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:24:34.629555   12665 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:24:34.629672   12665 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:24:34.629778   12665 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001049051s
	I0916 10:24:34.629884   12665 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:24:34.629971   12665 kubeadm.go:310] [api-check] The API server is healthy after 4.501104615s
	I0916 10:24:34.630097   12665 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:24:34.630257   12665 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:24:34.630345   12665 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:24:34.630520   12665 kubeadm.go:310] [mark-control-plane] Marking the node addons-855148 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:24:34.630602   12665 kubeadm.go:310] [bootstrap-token] Using token: ceggpe.vup4ji0jtkb1sqs2
	I0916 10:24:34.631672   12665 out.go:235]   - Configuring RBAC rules ...
	I0916 10:24:34.631768   12665 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:24:34.631856   12665 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:24:34.632005   12665 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:24:34.632116   12665 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:24:34.632211   12665 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:24:34.632330   12665 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:24:34.632435   12665 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:24:34.632473   12665 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:24:34.632516   12665 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:24:34.632522   12665 kubeadm.go:310] 
	I0916 10:24:34.632587   12665 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:24:34.632601   12665 kubeadm.go:310] 
	I0916 10:24:34.632704   12665 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:24:34.632712   12665 kubeadm.go:310] 
	I0916 10:24:34.632747   12665 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:24:34.632830   12665 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:24:34.632885   12665 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:24:34.632892   12665 kubeadm.go:310] 
	I0916 10:24:34.632934   12665 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:24:34.632939   12665 kubeadm.go:310] 
	I0916 10:24:34.633010   12665 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:24:34.633022   12665 kubeadm.go:310] 
	I0916 10:24:34.633101   12665 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:24:34.633162   12665 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:24:34.633216   12665 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:24:34.633222   12665 kubeadm.go:310] 
	I0916 10:24:34.633303   12665 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:24:34.633374   12665 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:24:34.633380   12665 kubeadm.go:310] 
	I0916 10:24:34.633450   12665 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ceggpe.vup4ji0jtkb1sqs2 \
	I0916 10:24:34.633530   12665 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:50f56bbb24864618e9caca37b3363f5b56e28d88eab407504c75c246ded43b5f \
	I0916 10:24:34.633549   12665 kubeadm.go:310] 	--control-plane 
	I0916 10:24:34.633554   12665 kubeadm.go:310] 
	I0916 10:24:34.633653   12665 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:24:34.633671   12665 kubeadm.go:310] 
	I0916 10:24:34.633785   12665 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ceggpe.vup4ji0jtkb1sqs2 \
	I0916 10:24:34.633895   12665 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:50f56bbb24864618e9caca37b3363f5b56e28d88eab407504c75c246ded43b5f 
	I0916 10:24:34.633916   12665 cni.go:84] Creating CNI manager for ""
	I0916 10:24:34.633934   12665 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:24:34.635124   12665 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 10:24:34.636206   12665 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 10:24:34.645766   12665 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0916 10:24:34.664919   12665 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:24:34.664983   12665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:24:34.665026   12665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-855148 minikube.k8s.io/updated_at=2024_09_16T10_24_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=addons-855148 minikube.k8s.io/primary=true
	I0916 10:24:34.677047   12665 ops.go:34] apiserver oom_adj: -16
	I0916 10:24:34.753458   12665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:24:35.253900   12665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:24:35.753658   12665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:24:36.253997   12665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:24:36.754563   12665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:24:37.254197   12665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:24:37.753786   12665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:24:38.253566   12665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:24:38.329646   12665 kubeadm.go:1113] duration metric: took 3.664716104s to wait for elevateKubeSystemPrivileges
	I0916 10:24:38.329681   12665 kubeadm.go:394] duration metric: took 14.296806231s to StartCluster
	I0916 10:24:38.329698   12665 settings.go:142] acquiring lock: {Name:mk788c9fb3f9e1a112a9b25e2b5b2a6a057271c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:24:38.329825   12665 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3871/kubeconfig
	I0916 10:24:38.330280   12665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3871/kubeconfig: {Name:mk10ecce75f187f7c3758e3e00591ffdfeebdbf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:24:38.330511   12665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:24:38.330507   12665 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:24:38.330532   12665 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 10:24:38.330657   12665 addons.go:69] Setting yakd=true in profile "addons-855148"
	I0916 10:24:38.330678   12665 addons.go:234] Setting addon yakd=true in "addons-855148"
	I0916 10:24:38.330702   12665 host.go:66] Checking if "addons-855148" exists ...
	I0916 10:24:38.330727   12665 addons.go:69] Setting ingress-dns=true in profile "addons-855148"
	I0916 10:24:38.330733   12665 config.go:182] Loaded profile config "addons-855148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:24:38.330740   12665 addons.go:69] Setting default-storageclass=true in profile "addons-855148"
	I0916 10:24:38.330758   12665 addons.go:69] Setting gcp-auth=true in profile "addons-855148"
	I0916 10:24:38.330782   12665 mustload.go:65] Loading cluster: addons-855148
	I0916 10:24:38.330749   12665 addons.go:234] Setting addon ingress-dns=true in "addons-855148"
	I0916 10:24:38.330789   12665 addons.go:69] Setting helm-tiller=true in profile "addons-855148"
	I0916 10:24:38.330802   12665 addons.go:234] Setting addon helm-tiller=true in "addons-855148"
	I0916 10:24:38.330823   12665 host.go:66] Checking if "addons-855148" exists ...
	I0916 10:24:38.330842   12665 host.go:66] Checking if "addons-855148" exists ...
	I0916 10:24:38.330967   12665 config.go:182] Loaded profile config "addons-855148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:24:38.331159   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.331172   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.331211   12665 addons.go:69] Setting metrics-server=true in profile "addons-855148"
	I0916 10:24:38.331214   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.331221   12665 addons.go:234] Setting addon metrics-server=true in "addons-855148"
	I0916 10:24:38.331247   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.331257   12665 host.go:66] Checking if "addons-855148" exists ...
	I0916 10:24:38.331293   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.331312   12665 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-855148"
	I0916 10:24:38.331322   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.331332   12665 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-855148"
	I0916 10:24:38.331344   12665 addons.go:69] Setting cloud-spanner=true in profile "addons-855148"
	I0916 10:24:38.331363   12665 addons.go:234] Setting addon cloud-spanner=true in "addons-855148"
	I0916 10:24:38.331365   12665 host.go:66] Checking if "addons-855148" exists ...
	I0916 10:24:38.331378   12665 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-855148"
	I0916 10:24:38.331390   12665 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-855148"
	I0916 10:24:38.331600   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.331663   12665 host.go:66] Checking if "addons-855148" exists ...
	I0916 10:24:38.331199   12665 addons.go:69] Setting registry=true in profile "addons-855148"
	I0916 10:24:38.331764   12665 addons.go:234] Setting addon registry=true in "addons-855148"
	I0916 10:24:38.331770   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.331790   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.331798   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.331817   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.330782   12665 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-855148"
	I0916 10:24:38.331793   12665 host.go:66] Checking if "addons-855148" exists ...
	I0916 10:24:38.331605   12665 addons.go:69] Setting volcano=true in profile "addons-855148"
	I0916 10:24:38.332056   12665 addons.go:234] Setting addon volcano=true in "addons-855148"
	I0916 10:24:38.332083   12665 host.go:66] Checking if "addons-855148" exists ...
	I0916 10:24:38.331200   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.331205   12665 addons.go:69] Setting inspektor-gadget=true in profile "addons-855148"
	I0916 10:24:38.332198   12665 addons.go:234] Setting addon inspektor-gadget=true in "addons-855148"
	I0916 10:24:38.332221   12665 host.go:66] Checking if "addons-855148" exists ...
	I0916 10:24:38.332226   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.332249   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.331591   12665 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-855148"
	I0916 10:24:38.332414   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.332441   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.332414   12665 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-855148"
	I0916 10:24:38.331628   12665 addons.go:69] Setting ingress=true in profile "addons-855148"
	I0916 10:24:38.332499   12665 addons.go:234] Setting addon ingress=true in "addons-855148"
	I0916 10:24:38.332530   12665 host.go:66] Checking if "addons-855148" exists ...
	I0916 10:24:38.332535   12665 host.go:66] Checking if "addons-855148" exists ...
	I0916 10:24:38.332540   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.332564   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.332579   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.332616   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.331189   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.331635   12665 addons.go:69] Setting volumesnapshots=true in profile "addons-855148"
	I0916 10:24:38.333020   12665 addons.go:234] Setting addon volumesnapshots=true in "addons-855148"
	I0916 10:24:38.331667   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.331621   12665 addons.go:69] Setting storage-provisioner=true in profile "addons-855148"
	I0916 10:24:38.333257   12665 addons.go:234] Setting addon storage-provisioner=true in "addons-855148"
	I0916 10:24:38.332485   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.333293   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.333685   12665 host.go:66] Checking if "addons-855148" exists ...
	I0916 10:24:38.333973   12665 host.go:66] Checking if "addons-855148" exists ...
	I0916 10:24:38.336624   12665 out.go:177] * Verifying Kubernetes components...
	I0916 10:24:38.343495   12665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:24:38.349759   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37331
	I0916 10:24:38.350273   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.350757   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.350776   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.351107   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.351635   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.351674   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.353951   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41993
	I0916 10:24:38.354097   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41075
	I0916 10:24:38.354377   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.354585   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.354929   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.354950   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.355032   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.355049   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.355410   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.355421   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.355562   12665 main.go:141] libmachine: (addons-855148) Calling .GetState
	I0916 10:24:38.356595   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.356641   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.357687   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36385
	I0916 10:24:38.358292   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.358789   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.358818   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.359153   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.359371   12665 main.go:141] libmachine: (addons-855148) Calling .GetState
	I0916 10:24:38.360942   12665 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-855148"
	I0916 10:24:38.360990   12665 host.go:66] Checking if "addons-855148" exists ...
	I0916 10:24:38.361374   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.361409   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.362338   12665 host.go:66] Checking if "addons-855148" exists ...
	I0916 10:24:38.362694   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.362720   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.363128   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38801
	I0916 10:24:38.364067   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33155
	I0916 10:24:38.375712   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.375748   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.375754   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.375773   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.375798   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.375837   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.380049   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.380096   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.382124   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40903
	I0916 10:24:38.382262   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42045
	I0916 10:24:38.382626   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.382742   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.382801   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.383173   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.383195   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.383316   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.383331   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.383343   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.383356   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.383402   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.383786   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.383852   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.384082   12665 main.go:141] libmachine: (addons-855148) Calling .GetState
	I0916 10:24:38.384381   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.384427   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.385023   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.385193   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.385206   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.387414   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34199
	I0916 10:24:38.387511   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.387572   12665 main.go:141] libmachine: (addons-855148) Calling .GetState
	I0916 10:24:38.387733   12665 main.go:141] libmachine: (addons-855148) Calling .GetState
	I0916 10:24:38.389704   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:38.390363   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:38.391871   12665 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:24:38.391924   12665 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 10:24:38.393041   12665 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:24:38.393062   12665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:24:38.393080   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:38.393693   12665 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:24:38.393707   12665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 10:24:38.393722   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:38.397995   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.399781   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.400409   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:38.400429   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.400869   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:38.401056   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:38.401112   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:38.401126   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.401201   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:38.401284   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:38.401412   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:38.401469   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37027
	I0916 10:24:38.401729   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:38.401982   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.402083   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:38.402246   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:38.402766   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.402782   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.403097   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.403262   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:38.410048   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I0916 10:24:38.410577   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.411112   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.411129   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.411679   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.412217   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.412293   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.415035   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I0916 10:24:38.415499   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.415851   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.415864   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.416202   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.416738   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.416774   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.416982   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41821
	I0916 10:24:38.417358   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.417538   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40445
	I0916 10:24:38.417812   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.417992   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.418013   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.418312   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.418420   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.418440   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.418710   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.418739   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.418749   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.419223   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.419266   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.421586   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40247
	I0916 10:24:38.421596   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45405
	I0916 10:24:38.421897   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.421988   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.422840   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.422869   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.423006   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.423028   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.423350   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.423364   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.423913   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.423951   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.424329   12665 addons.go:234] Setting addon default-storageclass=true in "addons-855148"
	I0916 10:24:38.424374   12665 host.go:66] Checking if "addons-855148" exists ...
	I0916 10:24:38.424710   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.424748   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.425331   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.425356   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.425565   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.426120   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.426135   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.426592   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39417
	I0916 10:24:38.426854   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.427044   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.427540   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.427579   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.428036   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.428051   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.428349   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.428842   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.428875   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.430366   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43135
	I0916 10:24:38.440826   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.441371   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39511
	I0916 10:24:38.441697   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.441719   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.441800   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.442216   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.442230   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.442456   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.443045   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.443084   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.443370   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.443412   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34373
	I0916 10:24:38.443720   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.444146   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.444171   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.444423   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.444441   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.444757   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.444981   12665 main.go:141] libmachine: (addons-855148) Calling .GetState
	I0916 10:24:38.448387   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:38.450395   12665 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:24:38.451536   12665 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:24:38.451553   12665 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:24:38.451574   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:38.453987   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34543
	I0916 10:24:38.454957   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.455266   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39305
	I0916 10:24:38.455500   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.455514   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.455580   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.455905   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:38.455930   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.456124   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.456178   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:38.456308   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:38.456409   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:38.456471   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.456567   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39703
	I0916 10:24:38.456713   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43633
	I0916 10:24:38.456973   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.456991   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.457063   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.457163   12665 main.go:141] libmachine: (addons-855148) Calling .GetState
	I0916 10:24:38.457205   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:38.457813   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.457841   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.457916   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.457944   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.458337   12665 main.go:141] libmachine: (addons-855148) Calling .GetState
	I0916 10:24:38.458442   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.459100   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:38.459153   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.459166   12665 main.go:141] libmachine: (addons-855148) Calling .GetState
	I0916 10:24:38.459167   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.459645   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.460374   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:38.461146   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.461187   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.461394   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:38.461458   12665 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:24:38.462136   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42457
	I0916 10:24:38.462602   12665 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 10:24:38.462661   12665 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:24:38.462702   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.463292   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.463308   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.463396   12665 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:24:38.463424   12665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:24:38.463445   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:38.463936   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.463988   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34027
	I0916 10:24:38.464175   12665 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:24:38.464467   12665 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:24:38.464486   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:38.464573   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:38.464617   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:38.465216   12665 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:24:38.465471   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.466003   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.466018   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.466391   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.466538   12665 main.go:141] libmachine: (addons-855148) Calling .GetState
	I0916 10:24:38.467019   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.467277   12665 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:24:38.467300   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:38.467337   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.467605   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:38.467755   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:38.467872   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:38.468030   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:38.468546   12665 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:24:38.468564   12665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 10:24:38.468581   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:38.468717   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:38.469023   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.469499   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:38.469527   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.469653   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:38.469820   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:38.469962   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:38.470103   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:38.470602   12665 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:24:38.472112   12665 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:24:38.472128   12665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:24:38.472145   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:38.472305   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.472717   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:38.472736   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.472950   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:38.473102   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:38.473268   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:38.473419   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:38.473823   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39677
	I0916 10:24:38.474018   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36775
	I0916 10:24:38.474230   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.474476   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.475029   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.475051   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.475426   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.475444   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.475513   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.475556   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.475706   12665 main.go:141] libmachine: (addons-855148) Calling .GetState
	I0916 10:24:38.475888   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.475961   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:38.475975   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.476130   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:38.476286   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:38.476421   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:38.476541   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:38.477034   12665 main.go:141] libmachine: (addons-855148) Calling .GetState
	I0916 10:24:38.477212   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:38.478668   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:38.478907   12665 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:24:38.479940   12665 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:24:38.481170   12665 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:24:38.482125   12665 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:24:38.482232   12665 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:24:38.482247   12665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:24:38.482264   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:38.484271   12665 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:24:38.485368   12665 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:24:38.486048   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.486239   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:38.486256   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.486409   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:38.486531   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:38.486665   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:38.486773   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:38.487207   12665 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:24:38.487507   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42345
	I0916 10:24:38.488554   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.489684   12665 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:24:38.490539   12665 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:24:38.491377   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36345
	I0916 10:24:38.494028   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.494050   12665 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:24:38.494277   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.494309   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.494792   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.494810   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.495026   12665 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:24:38.495050   12665 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:24:38.495071   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:38.495749   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.495813   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.496130   12665 main.go:141] libmachine: (addons-855148) Calling .GetState
	I0916 10:24:38.496186   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33931
	I0916 10:24:38.496796   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.497476   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.497496   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.497567   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35883
	I0916 10:24:38.498219   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.498238   12665 main.go:141] libmachine: (addons-855148) Calling .GetState
	I0916 10:24:38.498394   12665 main.go:141] libmachine: (addons-855148) Calling .GetState
	I0916 10:24:38.498752   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.499161   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.499692   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:38.499720   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.500200   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:38.500473   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:38.500480   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.500495   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.500691   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:38.500880   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:38.501137   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:38.501560   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:38.501641   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:38.501995   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.502170   12665 main.go:141] libmachine: (addons-855148) Calling .GetState
	I0916 10:24:38.503272   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0916 10:24:38.503276   12665 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:24:38.503599   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.503672   12665 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:24:38.503682   12665 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:24:38.503692   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:38.503700   12665 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:24:38.504235   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:38.504308   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.504762   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.505045   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.505147   12665 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:24:38.505160   12665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:24:38.505175   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:38.505213   12665 main.go:141] libmachine: (addons-855148) Calling .GetState
	I0916 10:24:38.506383   12665 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:24:38.506426   12665 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:24:38.506758   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:38.507418   12665 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:24:38.507429   12665 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:24:38.507440   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:38.507910   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.508086   12665 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:24:38.508146   12665 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:24:38.508153   12665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:24:38.508186   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:38.508954   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:38.508966   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.509103   12665 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:24:38.509117   12665 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:24:38.509134   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:38.509330   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:38.509495   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:38.509627   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:38.509759   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:38.511818   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.512265   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:38.512278   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.512426   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:38.512619   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:38.512688   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.512755   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:38.512908   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:38.513319   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:38.513343   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.513488   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:38.513603   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:38.513715   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:38.513852   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:38.514078   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.514391   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46749
	I0916 10:24:38.514406   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	W0916 10:24:38.514756   12665 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56310->192.168.39.55:22: read: connection reset by peer
	I0916 10:24:38.514782   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:38.514785   12665 retry.go:31] will retry after 299.024089ms: ssh: handshake failed: read tcp 192.168.39.1:56310->192.168.39.55:22: read: connection reset by peer
	I0916 10:24:38.514844   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:38.514859   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:38.514873   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.514874   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.515149   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:38.515194   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:38.515348   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:38.515353   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:38.515371   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:38.515387   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:38.515488   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:38.515491   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:38.515597   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:38.515617   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:38.516064   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:38.516219   12665 main.go:141] libmachine: (addons-855148) Calling .GetState
	I0916 10:24:38.517444   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:38.518957   12665 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0916 10:24:38.520081   12665 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0916 10:24:38.521168   12665 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0916 10:24:38.523326   12665 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:24:38.523350   12665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0916 10:24:38.523368   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:38.526497   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.526944   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:38.526965   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:38.527153   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:38.527344   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:38.527482   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:38.527612   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:38.713961   12665 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:24:38.714004   12665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:24:38.745149   12665 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:24:38.745172   12665 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:24:38.780517   12665 node_ready.go:35] waiting up to 6m0s for node "addons-855148" to be "Ready" ...
	I0916 10:24:38.783690   12665 node_ready.go:49] node "addons-855148" has status "Ready":"True"
	I0916 10:24:38.783712   12665 node_ready.go:38] duration metric: took 3.159031ms for node "addons-855148" to be "Ready" ...
	I0916 10:24:38.783722   12665 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:24:38.788954   12665 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-855148" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:38.797305   12665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:24:38.799412   12665 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:24:38.799425   12665 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:24:38.849661   12665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:24:38.870352   12665 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:24:38.870381   12665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:24:38.876168   12665 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:24:38.876191   12665 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:24:38.898097   12665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:24:38.969220   12665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:24:38.982010   12665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:24:38.994387   12665 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:24:38.994415   12665 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:24:38.998815   12665 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:24:38.998846   12665 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:24:39.033102   12665 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:24:39.033124   12665 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:24:39.039819   12665 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:24:39.039843   12665 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:24:39.049009   12665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:24:39.081286   12665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:24:39.100882   12665 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:24:39.100904   12665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:24:39.103639   12665 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:24:39.103662   12665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:24:39.113100   12665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:24:39.191742   12665 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:24:39.191771   12665 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:24:39.230822   12665 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:24:39.230844   12665 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:24:39.321586   12665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:24:39.340786   12665 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:24:39.340817   12665 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:24:39.419150   12665 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:24:39.419178   12665 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:24:39.456867   12665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:24:39.695893   12665 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:24:39.695918   12665 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:24:39.711169   12665 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:24:39.711189   12665 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:24:39.714999   12665 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:24:39.715016   12665 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:24:39.832293   12665 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:24:39.832314   12665 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:24:39.870410   12665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:24:40.018288   12665 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:24:40.018315   12665 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:24:40.021101   12665 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:24:40.021126   12665 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:24:40.220740   12665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:24:40.416908   12665 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:24:40.416935   12665 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:24:40.695128   12665 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:24:40.695152   12665 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:24:40.747752   12665 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:24:40.747776   12665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:24:40.798564   12665 pod_ready.go:103] pod "etcd-addons-855148" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:40.986228   12665 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:24:40.986255   12665 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:24:41.092211   12665 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:24:41.092240   12665 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:24:41.096691   12665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:24:41.152389   12665 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:24:41.152416   12665 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:24:41.285932   12665 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:24:41.285960   12665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:24:41.490829   12665 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:24:41.490859   12665 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:24:41.720515   12665 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:24:41.720538   12665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:24:41.757271   12665 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:24:41.757296   12665 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:24:42.021788   12665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:24:42.090910   12665 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.37687713s)
	I0916 10:24:42.090953   12665 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0916 10:24:42.091002   12665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.293665129s)
	I0916 10:24:42.091043   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:42.091056   12665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.241365592s)
	I0916 10:24:42.091089   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:42.091102   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:42.091064   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:42.091371   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:42.091390   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:42.091399   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:42.091406   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:42.091487   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:42.091622   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:42.091621   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:42.091644   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:42.092848   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:42.092871   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:42.092890   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:42.092901   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:42.093119   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:42.093133   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:42.093168   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:42.117044   12665 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:24:42.117067   12665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:24:42.510701   12665 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:24:42.510720   12665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:24:42.594765   12665 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-855148" context rescaled to 1 replicas
	I0916 10:24:42.860395   12665 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:24:42.860417   12665 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:24:43.131371   12665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:24:43.299632   12665 pod_ready.go:103] pod "etcd-addons-855148" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:44.370305   12665 pod_ready.go:93] pod "etcd-addons-855148" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:44.370326   12665 pod_ready.go:82] duration metric: took 5.581347979s for pod "etcd-addons-855148" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:44.370336   12665 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-855148" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:44.449753   12665 pod_ready.go:93] pod "kube-apiserver-addons-855148" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:44.449776   12665 pod_ready.go:82] duration metric: took 79.434223ms for pod "kube-apiserver-addons-855148" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:44.449785   12665 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-855148" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:44.822088   12665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.92395082s)
	I0916 10:24:44.822135   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:44.822150   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:44.822416   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:44.822470   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:44.822485   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:44.822493   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:44.822472   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:44.822688   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:44.822729   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:44.822743   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:44.896788   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:44.896818   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:44.897099   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:44.897117   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:44.897119   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:45.420785   12665 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:24:45.420821   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:45.424024   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:45.424519   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:45.424546   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:45.424772   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:45.425092   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:45.425270   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:45.425405   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:46.039031   12665 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:24:46.285839   12665 addons.go:234] Setting addon gcp-auth=true in "addons-855148"
	I0916 10:24:46.285898   12665 host.go:66] Checking if "addons-855148" exists ...
	I0916 10:24:46.286344   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:46.286387   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:46.302107   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I0916 10:24:46.302650   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:46.303156   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:46.303180   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:46.304111   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:46.304676   12665 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:24:46.304709   12665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:24:46.320215   12665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37873
	I0916 10:24:46.320718   12665 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:24:46.321191   12665 main.go:141] libmachine: Using API Version  1
	I0916 10:24:46.321215   12665 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:24:46.321576   12665 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:24:46.321774   12665 main.go:141] libmachine: (addons-855148) Calling .GetState
	I0916 10:24:46.323391   12665 main.go:141] libmachine: (addons-855148) Calling .DriverName
	I0916 10:24:46.323609   12665 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:24:46.323635   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHHostname
	I0916 10:24:46.327567   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:46.328095   12665 main.go:141] libmachine: (addons-855148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:be", ip: ""} in network mk-addons-855148: {Iface:virbr1 ExpiryTime:2024-09-16 11:23:59 +0000 UTC Type:0 Mac:52:54:00:5b:32:be Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-855148 Clientid:01:52:54:00:5b:32:be}
	I0916 10:24:46.328120   12665 main.go:141] libmachine: (addons-855148) DBG | domain addons-855148 has defined IP address 192.168.39.55 and MAC address 52:54:00:5b:32:be in network mk-addons-855148
	I0916 10:24:46.328312   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHPort
	I0916 10:24:46.328493   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHKeyPath
	I0916 10:24:46.328664   12665 main.go:141] libmachine: (addons-855148) Calling .GetSSHUsername
	I0916 10:24:46.328812   12665 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/addons-855148/id_rsa Username:docker}
	I0916 10:24:46.456635   12665 pod_ready.go:103] pod "kube-controller-manager-addons-855148" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:47.471663   12665 pod_ready.go:93] pod "kube-controller-manager-addons-855148" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.471691   12665 pod_ready.go:82] duration metric: took 3.02189861s for pod "kube-controller-manager-addons-855148" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.471703   12665 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-855148" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.485671   12665 pod_ready.go:93] pod "kube-scheduler-addons-855148" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.485706   12665 pod_ready.go:82] duration metric: took 13.984511ms for pod "kube-scheduler-addons-855148" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.485716   12665 pod_ready.go:39] duration metric: took 8.701981503s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:24:47.485738   12665 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:24:47.485801   12665 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:24:48.084212   12665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.114953207s)
	I0916 10:24:48.084263   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:48.084276   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:48.084283   12665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.102242601s)
	I0916 10:24:48.084328   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:48.084340   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:48.084412   12665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.035378076s)
	I0916 10:24:48.084432   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:48.084440   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:48.084451   12665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.003137191s)
	I0916 10:24:48.084483   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:48.084491   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:48.084789   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:48.084801   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:48.084803   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:48.084814   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:48.084814   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:48.084811   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:48.084825   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:48.084833   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:48.084836   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:48.084861   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:48.084872   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:48.084823   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:48.084880   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:48.084892   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:48.084885   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:48.085017   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:48.085045   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:48.085054   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:48.085066   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:48.085100   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:48.085127   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:48.085133   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:48.085199   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:48.085208   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:48.085431   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:48.085455   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:48.085461   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:48.085476   12665 addons.go:475] Verifying addon ingress=true in "addons-855148"
	I0916 10:24:48.085778   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:48.085821   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:48.085829   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:48.088205   12665 out.go:177] * Verifying ingress addon...
	I0916 10:24:48.090256   12665 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 10:24:48.102128   12665 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 10:24:48.102176   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.126941   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:48.126969   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:48.127220   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:48.127258   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:48.127267   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:48.633217   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.108036   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.753095   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.131543   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.518608   12665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.405467484s)
	I0916 10:24:50.518651   12665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.197034419s)
	I0916 10:24:50.518681   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:50.518698   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:50.518716   12665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (11.061797023s)
	I0916 10:24:50.518681   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:50.518770   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:50.518786   12665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.298019924s)
	I0916 10:24:50.518812   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:50.518828   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:50.518728   12665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.64829088s)
	I0916 10:24:50.518953   12665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.422217626s)
	W0916 10:24:50.518995   12665 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:24:50.518752   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:50.519019   12665 retry.go:31] will retry after 228.823512ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:24:50.519037   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:50.519086   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:50.519227   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:50.519100   12665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.497282599s)
	I0916 10:24:50.519117   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:50.519295   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:50.519148   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:50.519324   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:50.519328   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:50.519339   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:50.519346   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:50.519139   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:50.519176   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:50.519378   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:50.519385   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:50.519391   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:50.519159   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:50.519438   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:50.519445   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:50.519452   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:50.519458   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:50.519503   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:50.519524   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:50.519531   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:50.519539   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:50.519546   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:50.519864   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:50.519893   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:50.519899   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:50.520506   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:50.520536   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:50.520546   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:50.520554   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:50.520560   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:50.520924   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:50.520955   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:50.520967   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:50.522081   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:50.522106   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:50.522111   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:50.522137   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:50.522182   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:50.522191   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:50.522250   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:50.522276   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:50.522282   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:50.522287   12665 addons.go:475] Verifying addon metrics-server=true in "addons-855148"
	I0916 10:24:50.522491   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:50.522507   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:50.522515   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:50.522522   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:50.522708   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:50.522724   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:50.522734   12665 addons.go:475] Verifying addon registry=true in "addons-855148"
	I0916 10:24:50.523667   12665 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-855148 service yakd-dashboard -n yakd-dashboard
	
	I0916 10:24:50.524526   12665 out.go:177] * Verifying registry addon...
	I0916 10:24:50.526816   12665 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 10:24:50.589716   12665 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:24:50.589736   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:50.626918   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.748799   12665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:24:51.042629   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:51.159284   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.537717   12665 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.214070028s)
	I0916 10:24:51.537749   12665 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.051931704s)
	I0916 10:24:51.537768   12665 api_server.go:72] duration metric: took 13.207175586s to wait for apiserver process to appear ...
	I0916 10:24:51.537774   12665 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:24:51.537795   12665 api_server.go:253] Checking apiserver healthz at https://192.168.39.55:8443/healthz ...
	I0916 10:24:51.539200   12665 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:24:51.539975   12665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.408555379s)
	I0916 10:24:51.540018   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:51.540034   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:51.540253   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:51.540269   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:51.540269   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:51.540277   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:51.540284   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:51.540545   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:51.540601   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:51.540618   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:51.540632   12665 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-855148"
	I0916 10:24:51.541708   12665 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:24:51.541780   12665 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:24:51.542968   12665 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:24:51.542990   12665 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:24:51.543665   12665 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:24:51.549493   12665 api_server.go:279] https://192.168.39.55:8443/healthz returned 200:
	ok
	I0916 10:24:51.554786   12665 api_server.go:141] control plane version: v1.31.1
	I0916 10:24:51.554808   12665 api_server.go:131] duration metric: took 17.027088ms to wait for apiserver health ...
	I0916 10:24:51.554818   12665 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:24:51.590873   12665 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:24:51.590903   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:51.594039   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:51.598420   12665 system_pods.go:59] 19 kube-system pods found
	I0916 10:24:51.598452   12665 system_pods.go:61] "coredns-7c65d6cfc9-m4nkk" [c8895d1e-4e84-43e9-8c8f-1e7bf6728db1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I0916 10:24:51.598460   12665 system_pods.go:61] "coredns-7c65d6cfc9-tv8ql" [ec390ffc-e96f-439c-b320-1740769047c8] Running
	I0916 10:24:51.598472   12665 system_pods.go:61] "csi-hostpath-attacher-0" [3369f826-d54d-4c77-b5aa-f367ecda685e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:24:51.598481   12665 system_pods.go:61] "csi-hostpath-resizer-0" [21c3faa4-8656-46e3-bba5-5d91ffaa91fb] Pending
	I0916 10:24:51.598494   12665 system_pods.go:61] "csi-hostpathplugin-9wqxk" [602638a9-9f06-4b86-8f5b-979db6b64548] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:24:51.598500   12665 system_pods.go:61] "etcd-addons-855148" [e32bae24-3818-43c9-a0de-64a7c9096364] Running
	I0916 10:24:51.598505   12665 system_pods.go:61] "kube-apiserver-addons-855148" [26bf1dab-b6d8-4b81-b9d9-d1f78824aff1] Running
	I0916 10:24:51.598508   12665 system_pods.go:61] "kube-controller-manager-addons-855148" [9b64f8bb-e096-4be7-8a98-a058acac8ad8] Running
	I0916 10:24:51.598515   12665 system_pods.go:61] "kube-ingress-dns-minikube" [663c92b5-7503-473d-8674-ab2413dc35df] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 10:24:51.598518   12665 system_pods.go:61] "kube-proxy-jbvrw" [dbc85b7f-1e45-4edb-8053-fc79b390ca7d] Running
	I0916 10:24:51.598524   12665 system_pods.go:61] "kube-scheduler-addons-855148" [2e870b7c-37d0-483f-9869-189cc9238f80] Running
	I0916 10:24:51.598531   12665 system_pods.go:61] "metrics-server-84c5f94fbc-85g9f" [db4177ae-2e39-4669-9705-a670a5333534] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:24:51.598537   12665 system_pods.go:61] "nvidia-device-plugin-daemonset-vnxj2" [5798be15-1585-4c1b-84bd-507b89b7d751] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:24:51.598545   12665 system_pods.go:61] "registry-66c9cd494c-s7jtc" [ff532941-80d3-4c2f-8fee-58e373f194d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:24:51.598556   12665 system_pods.go:61] "registry-proxy-sv7w9" [4332ab2c-d8b5-4a98-bd5a-54ed98d85a50] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:24:51.598569   12665 system_pods.go:61] "snapshot-controller-56fcc65765-2nwls" [4f0c1026-6549-4d33-81f5-0755e53451d3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:51.598582   12665 system_pods.go:61] "snapshot-controller-56fcc65765-w7wk9" [7ac1317e-a42f-4816-a5ff-1a6a8f7c2f9a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:51.598589   12665 system_pods.go:61] "storage-provisioner" [0b750707-7dab-426d-a8b2-f4c9ee02b31d] Running
	I0916 10:24:51.598600   12665 system_pods.go:61] "tiller-deploy-b48cc5f79-dtqxt" [1b093cc0-708b-4faa-9390-dd65d9ebc725] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:24:51.598609   12665 system_pods.go:74] duration metric: took 43.786773ms to wait for pod list to return data ...
	I0916 10:24:51.598619   12665 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:24:51.607849   12665 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:24:51.607872   12665 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:24:51.619435   12665 default_sa.go:45] found service account: "default"
	I0916 10:24:51.619457   12665 default_sa.go:55] duration metric: took 20.831801ms for default service account to be created ...
	I0916 10:24:51.619465   12665 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:24:51.641859   12665 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:51.641879   12665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:24:51.667961   12665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:51.673849   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.680806   12665 system_pods.go:86] 19 kube-system pods found
	I0916 10:24:51.680832   12665 system_pods.go:89] "coredns-7c65d6cfc9-m4nkk" [c8895d1e-4e84-43e9-8c8f-1e7bf6728db1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I0916 10:24:51.680837   12665 system_pods.go:89] "coredns-7c65d6cfc9-tv8ql" [ec390ffc-e96f-439c-b320-1740769047c8] Running
	I0916 10:24:51.680844   12665 system_pods.go:89] "csi-hostpath-attacher-0" [3369f826-d54d-4c77-b5aa-f367ecda685e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:24:51.680851   12665 system_pods.go:89] "csi-hostpath-resizer-0" [21c3faa4-8656-46e3-bba5-5d91ffaa91fb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:24:51.680858   12665 system_pods.go:89] "csi-hostpathplugin-9wqxk" [602638a9-9f06-4b86-8f5b-979db6b64548] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:24:51.680864   12665 system_pods.go:89] "etcd-addons-855148" [e32bae24-3818-43c9-a0de-64a7c9096364] Running
	I0916 10:24:51.680869   12665 system_pods.go:89] "kube-apiserver-addons-855148" [26bf1dab-b6d8-4b81-b9d9-d1f78824aff1] Running
	I0916 10:24:51.680873   12665 system_pods.go:89] "kube-controller-manager-addons-855148" [9b64f8bb-e096-4be7-8a98-a058acac8ad8] Running
	I0916 10:24:51.680882   12665 system_pods.go:89] "kube-ingress-dns-minikube" [663c92b5-7503-473d-8674-ab2413dc35df] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 10:24:51.680885   12665 system_pods.go:89] "kube-proxy-jbvrw" [dbc85b7f-1e45-4edb-8053-fc79b390ca7d] Running
	I0916 10:24:51.680889   12665 system_pods.go:89] "kube-scheduler-addons-855148" [2e870b7c-37d0-483f-9869-189cc9238f80] Running
	I0916 10:24:51.680894   12665 system_pods.go:89] "metrics-server-84c5f94fbc-85g9f" [db4177ae-2e39-4669-9705-a670a5333534] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:24:51.680901   12665 system_pods.go:89] "nvidia-device-plugin-daemonset-vnxj2" [5798be15-1585-4c1b-84bd-507b89b7d751] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:24:51.680910   12665 system_pods.go:89] "registry-66c9cd494c-s7jtc" [ff532941-80d3-4c2f-8fee-58e373f194d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:24:51.680915   12665 system_pods.go:89] "registry-proxy-sv7w9" [4332ab2c-d8b5-4a98-bd5a-54ed98d85a50] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:24:51.680922   12665 system_pods.go:89] "snapshot-controller-56fcc65765-2nwls" [4f0c1026-6549-4d33-81f5-0755e53451d3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:51.680927   12665 system_pods.go:89] "snapshot-controller-56fcc65765-w7wk9" [7ac1317e-a42f-4816-a5ff-1a6a8f7c2f9a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:51.680933   12665 system_pods.go:89] "storage-provisioner" [0b750707-7dab-426d-a8b2-f4c9ee02b31d] Running
	I0916 10:24:51.680938   12665 system_pods.go:89] "tiller-deploy-b48cc5f79-dtqxt" [1b093cc0-708b-4faa-9390-dd65d9ebc725] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:24:51.680947   12665 system_pods.go:126] duration metric: took 61.477581ms to wait for k8s-apps to be running ...
	I0916 10:24:51.680955   12665 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:24:51.680997   12665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:24:52.032013   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:52.048273   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:52.132570   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.530984   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:52.548040   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:52.594286   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.650447   12665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.901585215s)
	I0916 10:24:52.650509   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:52.650526   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:52.650882   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:52.650946   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:52.650961   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:52.650975   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:52.650919   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:52.651182   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:52.651198   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:53.063187   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:53.064047   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:53.083090   12665 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.402065926s)
	I0916 10:24:53.083127   12665 system_svc.go:56] duration metric: took 1.402169246s WaitForService to wait for kubelet
	I0916 10:24:53.083138   12665 kubeadm.go:582] duration metric: took 14.752543419s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:24:53.083162   12665 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:24:53.083184   12665 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.415188408s)
	I0916 10:24:53.083230   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:53.083264   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:53.083525   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:53.083558   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:53.083567   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:53.083576   12665 main.go:141] libmachine: Making call to close driver server
	I0916 10:24:53.083585   12665 main.go:141] libmachine: (addons-855148) Calling .Close
	I0916 10:24:53.083834   12665 main.go:141] libmachine: Successfully made call to close driver server
	I0916 10:24:53.083871   12665 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 10:24:53.083906   12665 main.go:141] libmachine: (addons-855148) DBG | Closing plugin on server side
	I0916 10:24:53.085216   12665 addons.go:475] Verifying addon gcp-auth=true in "addons-855148"
	I0916 10:24:53.087178   12665 out.go:177] * Verifying gcp-auth addon...
	I0916 10:24:53.089851   12665 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:24:53.108490   12665 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:24:53.108535   12665 node_conditions.go:123] node cpu capacity is 2
	I0916 10:24:53.108550   12665 node_conditions.go:105] duration metric: took 25.38191ms to run NodePressure ...
	I0916 10:24:53.108564   12665 start.go:241] waiting for startup goroutines ...
	I0916 10:24:53.147283   12665 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:24:53.148223   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.530239   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:53.547752   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:53.594482   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.030570   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:54.047577   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:54.131690   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.531154   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:54.547489   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:54.594671   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.030602   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:55.048519   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:55.095045   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.530831   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:55.548731   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:55.594989   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.031659   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:56.048371   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:56.135178   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.531595   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:56.890264   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:56.890714   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.029999   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:57.048675   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:57.094586   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.530778   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:57.547894   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:57.594010   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.030834   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:58.048625   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:58.094820   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.540349   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:58.550287   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:58.596635   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.030938   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:59.048969   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:59.095228   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.531080   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:59.548388   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:59.596174   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.031606   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:00.048215   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.095599   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.532715   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:00.549973   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.595714   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.030944   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:01.048950   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:01.095255   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.530108   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:01.548696   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:01.595905   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.030303   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:02.047622   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:02.095366   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.531204   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:02.547830   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:02.595228   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.031439   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:03.048715   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:03.094707   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.531130   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:03.548304   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:03.595268   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.241537   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.242951   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:04.243533   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:04.530983   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:04.548069   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:04.595345   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:05.030579   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:05.047902   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:05.093734   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:05.530959   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:05.548570   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:05.595155   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:06.030519   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:06.048227   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:06.096730   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:06.531472   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:06.548084   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:06.594648   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:07.030613   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:07.048554   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:07.095789   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:07.530242   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:07.547693   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:07.596117   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:08.032094   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:08.048128   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:08.094424   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:08.531393   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:08.549379   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:08.595006   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:09.030724   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:09.048671   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:09.096096   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:09.530874   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:09.548903   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:09.595115   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:10.029905   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:10.048973   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:10.095126   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:10.599754   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:10.601078   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:10.601453   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:11.031394   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:11.047711   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:11.094652   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:11.530522   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:11.548350   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:11.595104   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:12.031294   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:12.047605   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:12.095052   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:12.534345   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:12.635466   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:12.636757   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:13.031616   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:13.049333   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:13.095871   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:13.531667   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:13.548818   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:13.595787   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:14.030968   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:14.048173   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:14.094319   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:14.530233   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:14.547761   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:14.595495   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:15.031551   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:15.048696   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:15.093864   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:15.531600   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:15.549719   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:15.594710   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:16.031254   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:16.048233   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:16.093940   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:16.531998   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:16.548337   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:16.594915   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:17.031203   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:17.047710   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:17.094214   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:17.530864   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:17.547688   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:17.594226   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:18.030857   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:18.046954   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:18.094172   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:18.530225   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:18.548404   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:18.593659   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:19.031740   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:19.048442   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:19.095650   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:19.530228   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:19.547544   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:19.594847   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:20.187084   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:20.187407   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:20.187521   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:20.531185   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:20.547266   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:20.594626   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:21.030276   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:21.046992   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:21.094266   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:21.530302   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:21.547398   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:21.631107   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:22.032825   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:25:22.132686   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:22.132711   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:22.530708   12665 kapi.go:107] duration metric: took 32.003889597s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:25:22.547714   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:22.594202   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:23.048563   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:23.126203   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:23.550333   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:23.595056   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:24.050655   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:24.096197   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:24.548543   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:24.594215   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:25.047625   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:25.094530   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:25.547993   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:25.595279   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:26.047356   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:26.094536   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:26.548349   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:26.594882   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:27.048628   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:27.316195   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:27.547881   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:27.595060   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:28.048917   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:28.094215   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:28.547692   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:28.595202   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:29.048429   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:29.096054   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:29.548071   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:29.594307   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:30.047754   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:30.094703   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:30.547696   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:30.594571   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:31.048516   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:31.094225   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:31.547717   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:31.594119   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:32.048576   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:32.094611   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:32.550358   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:32.594505   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:33.048145   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:33.094788   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:33.549138   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:33.594858   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:34.048961   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:34.108169   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:34.553878   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:34.599904   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:35.054222   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:35.097140   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:35.556858   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:35.598042   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:36.127759   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:36.130197   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:36.554592   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:36.651883   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:37.048964   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:37.094642   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:37.548893   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:37.595633   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:38.049449   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:38.150683   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:38.550208   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:38.594627   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:39.049370   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:39.094833   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:39.548239   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:39.594632   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:40.049615   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:40.094495   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:40.547922   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:40.594242   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:41.053671   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:41.154394   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:41.548926   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:41.595064   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:42.054927   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:42.154247   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:42.549169   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:42.594560   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:43.049601   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:43.095327   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:43.547736   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:43.594737   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:44.048366   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:44.095490   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:44.548736   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:44.595721   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:45.049238   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:45.149271   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:45.549079   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:45.594664   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:46.049305   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:46.094469   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:46.548431   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:46.595525   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:47.048432   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:47.094555   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:47.547513   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:47.595626   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:48.055094   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:48.151521   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:48.547531   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:48.594530   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:49.047776   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:49.094383   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:49.548151   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:49.648136   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:50.048215   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:50.093822   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:50.548033   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:50.593625   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:51.049559   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:51.094395   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:51.548198   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:51.594644   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:52.048135   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:52.094209   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:52.564122   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:52.606432   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:53.051688   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:53.094948   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:53.551107   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:53.595096   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:54.053681   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:54.151137   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:54.548586   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:54.595110   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:55.048037   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:55.094006   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:55.548620   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:55.594757   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:56.051871   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:56.094284   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:56.611623   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:56.611753   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:57.048019   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:57.093817   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:57.549113   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:57.594598   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:58.047829   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:58.095122   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:58.548426   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:58.595047   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:59.048036   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:59.094236   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:59.548867   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:59.594799   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:26:00.048986   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:26:00.094142   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:26:00.548782   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:26:00.649436   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:26:01.050527   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:26:01.097273   12665 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:26:01.550141   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:26:01.595282   12665 kapi.go:107] duration metric: took 1m13.505021013s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 10:26:02.152203   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:26:02.548733   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:26:03.049975   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:26:03.548262   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:26:04.048941   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:26:04.548197   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:26:05.086778   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:26:05.548974   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:26:06.048716   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:26:06.604921   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:26:07.048649   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:26:07.548731   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:26:08.048421   12665 kapi.go:107] duration metric: took 1m16.504751203s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:26:15.107691   12665 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:26:15.107715   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:15.593886   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:16.093248   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:16.593628   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:17.093747   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:17.593546   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:18.093447   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:18.593925   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:19.093857   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:19.593574   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:20.093798   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:20.593902   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:21.092881   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:21.593173   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:22.093775   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:22.593957   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:23.094080   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:23.593653   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:24.094319   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:24.594088   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:25.093216   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:25.593830   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:26.092993   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:26.593541   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:27.093504   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:27.594435   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:28.093485   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:28.594388   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:29.093448   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:29.594133   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:30.093691   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:30.593761   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:31.093012   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:31.593517   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:32.093788   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:32.593163   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:33.093811   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:33.593428   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:34.093726   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:34.594362   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:35.093488   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:35.594005   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:36.093468   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:36.593871   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:37.093205   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:37.593839   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:38.093220   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:38.597698   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:39.093972   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:39.594457   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:40.093423   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:40.593521   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:41.093943   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:41.593386   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:42.093967   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:42.594956   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:43.093459   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:43.593838   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:44.093121   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:44.594199   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:45.093591   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:45.594390   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:46.092880   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:46.593072   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:47.093412   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:47.593769   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:48.093767   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:48.594399   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:49.093659   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:49.594650   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:50.093695   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:50.593502   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:51.093889   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:51.593198   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:52.093492   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:52.594069   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:53.093737   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:53.594221   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:54.093432   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:54.599945   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:55.093867   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:55.594417   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:56.096539   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:56.594040   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:57.093400   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:57.593763   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:58.093810   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:58.593635   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:59.094036   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:59.593686   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:00.094170   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:00.594536   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:01.093700   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:01.594323   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:02.093892   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:02.593458   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:03.094035   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:03.593608   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:04.094396   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:04.593916   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:05.094982   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:05.593515   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:06.093703   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:06.593928   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:07.094008   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:07.593514   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:08.094475   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:08.594258   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:09.093759   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:09.594333   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:10.093827   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:10.593969   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:11.093033   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:11.593440   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:12.093858   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:12.594187   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:13.094011   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:13.593411   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:14.093783   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:14.593534   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:15.094207   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:15.593927   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:16.093658   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:16.593119   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:17.093793   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:17.593352   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:18.094799   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:18.593116   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:19.093460   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:19.594185   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:20.093221   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:20.592678   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:21.092474   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:21.593494   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:22.093420   12665 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:22.594521   12665 kapi.go:107] duration metric: took 2m29.504663084s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:27:22.596009   12665 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-855148 cluster.
	I0916 10:27:22.597280   12665 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:27:22.598312   12665 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:27:22.599520   12665 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, cloud-spanner, storage-provisioner, default-storageclass, helm-tiller, volcano, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0916 10:27:22.600620   12665 addons.go:510] duration metric: took 2m44.27009229s for enable addons: enabled=[nvidia-device-plugin ingress-dns storage-provisioner-rancher cloud-spanner storage-provisioner default-storageclass helm-tiller volcano inspektor-gadget metrics-server yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0916 10:27:22.600666   12665 start.go:246] waiting for cluster config update ...
	I0916 10:27:22.600687   12665 start.go:255] writing updated cluster config ...
	I0916 10:27:22.600999   12665 ssh_runner.go:195] Run: rm -f paused
	I0916 10:27:22.656964   12665 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0916 10:27:22.658488   12665 out.go:177] * Done! kubectl is now configured to use "addons-855148" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 16 10:37:10 addons-855148 dockerd[1196]: time="2024-09-16T10:37:10.942141097Z" level=info msg="shim disconnected" id=08d4072e8d3fe1f5cf3730c2b55d4e18e1399a0fdc4ad70c1fa4816fb012fba2 namespace=moby
	Sep 16 10:37:10 addons-855148 dockerd[1196]: time="2024-09-16T10:37:10.942463291Z" level=warning msg="cleaning up after shim disconnected" id=08d4072e8d3fe1f5cf3730c2b55d4e18e1399a0fdc4ad70c1fa4816fb012fba2 namespace=moby
	Sep 16 10:37:10 addons-855148 dockerd[1196]: time="2024-09-16T10:37:10.942601906Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 10:37:17 addons-855148 dockerd[1190]: time="2024-09-16T10:37:17.141474967Z" level=info msg="ignoring event" container=b3e98bf588c4dc2824e54aa318f359ea6dcd9cda6cba7235b387a40b0da5fd6f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:37:17 addons-855148 dockerd[1196]: time="2024-09-16T10:37:17.142044167Z" level=info msg="shim disconnected" id=b3e98bf588c4dc2824e54aa318f359ea6dcd9cda6cba7235b387a40b0da5fd6f namespace=moby
	Sep 16 10:37:17 addons-855148 dockerd[1196]: time="2024-09-16T10:37:17.143002749Z" level=warning msg="cleaning up after shim disconnected" id=b3e98bf588c4dc2824e54aa318f359ea6dcd9cda6cba7235b387a40b0da5fd6f namespace=moby
	Sep 16 10:37:17 addons-855148 dockerd[1196]: time="2024-09-16T10:37:17.143306653Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 10:37:17 addons-855148 dockerd[1196]: time="2024-09-16T10:37:17.587479088Z" level=info msg="shim disconnected" id=3c737257dfc48b2888c504fae0168c95710dcf9ff50c9a7dd9ce75c9e0478bbb namespace=moby
	Sep 16 10:37:17 addons-855148 dockerd[1196]: time="2024-09-16T10:37:17.587556518Z" level=warning msg="cleaning up after shim disconnected" id=3c737257dfc48b2888c504fae0168c95710dcf9ff50c9a7dd9ce75c9e0478bbb namespace=moby
	Sep 16 10:37:17 addons-855148 dockerd[1196]: time="2024-09-16T10:37:17.587566089Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 10:37:17 addons-855148 dockerd[1190]: time="2024-09-16T10:37:17.588330971Z" level=info msg="ignoring event" container=3c737257dfc48b2888c504fae0168c95710dcf9ff50c9a7dd9ce75c9e0478bbb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:37:17 addons-855148 dockerd[1196]: time="2024-09-16T10:37:17.639417625Z" level=warning msg="cleanup warnings time=\"2024-09-16T10:37:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 16 10:37:17 addons-855148 dockerd[1196]: time="2024-09-16T10:37:17.667681930Z" level=info msg="shim disconnected" id=0fec38a4a31bb83eca80890a7b4efc7cc20c366db267953e9b9235fb5336ab06 namespace=moby
	Sep 16 10:37:17 addons-855148 dockerd[1196]: time="2024-09-16T10:37:17.667778828Z" level=warning msg="cleaning up after shim disconnected" id=0fec38a4a31bb83eca80890a7b4efc7cc20c366db267953e9b9235fb5336ab06 namespace=moby
	Sep 16 10:37:17 addons-855148 dockerd[1196]: time="2024-09-16T10:37:17.667789286Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 10:37:17 addons-855148 dockerd[1190]: time="2024-09-16T10:37:17.668515950Z" level=info msg="ignoring event" container=0fec38a4a31bb83eca80890a7b4efc7cc20c366db267953e9b9235fb5336ab06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:37:17 addons-855148 dockerd[1190]: time="2024-09-16T10:37:17.764566104Z" level=info msg="ignoring event" container=5b0f5cdcf8e71b9c49112b8b57ffa40796e662ffd2ab44e910731bac10ca0fe7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:37:17 addons-855148 dockerd[1196]: time="2024-09-16T10:37:17.765790026Z" level=info msg="shim disconnected" id=5b0f5cdcf8e71b9c49112b8b57ffa40796e662ffd2ab44e910731bac10ca0fe7 namespace=moby
	Sep 16 10:37:17 addons-855148 dockerd[1196]: time="2024-09-16T10:37:17.765991883Z" level=warning msg="cleaning up after shim disconnected" id=5b0f5cdcf8e71b9c49112b8b57ffa40796e662ffd2ab44e910731bac10ca0fe7 namespace=moby
	Sep 16 10:37:17 addons-855148 dockerd[1196]: time="2024-09-16T10:37:17.766575752Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 10:37:17 addons-855148 dockerd[1196]: time="2024-09-16T10:37:17.866217847Z" level=info msg="shim disconnected" id=a072ac2e093e87d68d31649ef26cf6d8259186c1613a3cae634eafea5b4d0ed7 namespace=moby
	Sep 16 10:37:17 addons-855148 dockerd[1196]: time="2024-09-16T10:37:17.866426129Z" level=warning msg="cleaning up after shim disconnected" id=a072ac2e093e87d68d31649ef26cf6d8259186c1613a3cae634eafea5b4d0ed7 namespace=moby
	Sep 16 10:37:17 addons-855148 dockerd[1196]: time="2024-09-16T10:37:17.866523043Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 10:37:17 addons-855148 dockerd[1190]: time="2024-09-16T10:37:17.867742023Z" level=info msg="ignoring event" container=a072ac2e093e87d68d31649ef26cf6d8259186c1613a3cae634eafea5b4d0ed7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:37:17 addons-855148 dockerd[1196]: time="2024-09-16T10:37:17.887056829Z" level=warning msg="cleanup warnings time=\"2024-09-16T10:37:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb32281a57974       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  11 seconds ago      Running             hello-world-app           0                   02e7b366c78e8       hello-world-app-55bf9c44b4-5rlr7
	68d5c6ab7b364       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                21 seconds ago      Running             nginx                     0                   ab9e6910102d1       nginx
	1a27b8836156c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   295b1656f6020       gcp-auth-89d5ffd79-d5nc2
	52b9e81843d3b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   381e0597e261d       ingress-nginx-admission-patch-pxqx8
	892a33fba3a7d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   a5cda1d301251       ingress-nginx-admission-create-srlqz
	0fbef90405e13       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   7bf5c38854672       storage-provisioner
	d68e12be91533       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   a7e55cacdae9d       coredns-7c65d6cfc9-tv8ql
	64580a888353e       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   d5ae5b4c1adc7       kube-proxy-jbvrw
	dcbe37287d4d5       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   567eb70ef232f       etcd-addons-855148
	73bfc1d887f6b       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   81d3fa93f42f4       kube-scheduler-addons-855148
	5d02005c877e4       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   2cf20a9c2d095       kube-controller-manager-addons-855148
	3a9eef4845884       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   51a594cb6f4d3       kube-apiserver-addons-855148
	
	
	==> coredns [d68e12be9153] <==
	[INFO] 10.244.0.22:58230 - 2320 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000300144s
	[INFO] 10.244.0.22:46880 - 57945 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000088241s
	[INFO] 10.244.0.22:58230 - 48798 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000102655s
	[INFO] 10.244.0.22:46880 - 35492 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00010925s
	[INFO] 10.244.0.22:58230 - 47784 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000180918s
	[INFO] 10.244.0.22:46880 - 65531 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000111319s
	[INFO] 10.244.0.22:46880 - 62260 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000085565s
	[INFO] 10.244.0.22:58230 - 52254 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000230389s
	[INFO] 10.244.0.22:46880 - 56289 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004532s
	[INFO] 10.244.0.22:46880 - 52673 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000139662s
	[INFO] 10.244.0.22:46880 - 33771 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000110717s
	[INFO] 10.244.0.22:33527 - 14757 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000082321s
	[INFO] 10.244.0.22:58007 - 5958 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000182518s
	[INFO] 10.244.0.22:33527 - 13337 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000088494s
	[INFO] 10.244.0.22:58007 - 15903 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00026716s
	[INFO] 10.244.0.22:33527 - 4170 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000115213s
	[INFO] 10.244.0.22:33527 - 53434 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000275463s
	[INFO] 10.244.0.22:33527 - 52730 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000133408s
	[INFO] 10.244.0.22:33527 - 6463 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00011091s
	[INFO] 10.244.0.22:33527 - 18821 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000343265s
	[INFO] 10.244.0.22:58007 - 61169 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000078185s
	[INFO] 10.244.0.22:58007 - 28659 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000118561s
	[INFO] 10.244.0.22:58007 - 61182 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069893s
	[INFO] 10.244.0.22:58007 - 53022 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000090158s
	[INFO] 10.244.0.22:58007 - 27058 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000082005s
	
	
	==> describe nodes <==
	Name:               addons-855148
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-855148
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=addons-855148
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_24_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-855148
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:24:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-855148
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:37:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:37:07 +0000   Mon, 16 Sep 2024 10:24:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:37:07 +0000   Mon, 16 Sep 2024 10:24:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:37:07 +0000   Mon, 16 Sep 2024 10:24:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:37:07 +0000   Mon, 16 Sep 2024 10:24:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    addons-855148
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 e575835ae69443b8bf515a604d9fcfbc
	  System UUID:                e575835a-e694-43b8-bf51-5a604d9fcfbc
	  Boot ID:                    4c095d5a-3bef-45b4-aadf-86828f7e0c3c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     hello-world-app-55bf9c44b4-5rlr7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  gcp-auth                    gcp-auth-89d5ffd79-d5nc2                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-tv8ql                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-855148                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-855148             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-855148    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-jbvrw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-855148             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-855148 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-855148 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-855148 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-855148 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-855148 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-855148 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-855148 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-855148 event: Registered Node addons-855148 in Controller
	
	
	==> dmesg <==
	[  +5.184809] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.276214] kauditd_printk_skb: 71 callbacks suppressed
	[Sep16 10:26] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.067520] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.606724] kauditd_printk_skb: 33 callbacks suppressed
	[ +44.194407] kauditd_printk_skb: 28 callbacks suppressed
	[Sep16 10:27] kauditd_printk_skb: 40 callbacks suppressed
	[ +13.178923] kauditd_printk_skb: 9 callbacks suppressed
	[  +8.684147] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.505434] kauditd_printk_skb: 2 callbacks suppressed
	[Sep16 10:28] kauditd_printk_skb: 20 callbacks suppressed
	[ +20.234590] kauditd_printk_skb: 21 callbacks suppressed
	[Sep16 10:31] kauditd_printk_skb: 28 callbacks suppressed
	[Sep16 10:36] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.682549] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.040762] kauditd_printk_skb: 36 callbacks suppressed
	[  +9.744737] kauditd_printk_skb: 39 callbacks suppressed
	[  +6.431021] kauditd_printk_skb: 19 callbacks suppressed
	[  +8.647908] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.567675] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.001392] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.025556] kauditd_printk_skb: 31 callbacks suppressed
	[Sep16 10:37] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.143352] kauditd_printk_skb: 37 callbacks suppressed
	[  +6.246662] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [dcbe37287d4d] <==
	{"level":"info","ts":"2024-09-16T10:25:32.393933Z","caller":"traceutil/trace.go:171","msg":"trace[1062171263] range","detail":"{range_begin:/registry/cronjobs/; range_end:/registry/cronjobs0; response_count:0; response_revision:1118; }","duration":"113.679807ms","start":"2024-09-16T10:25:32.280206Z","end":"2024-09-16T10:25:32.393886Z","steps":["trace[1062171263] 'count revisions from in-memory index tree'  (duration: 113.410895ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:42.034320Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.637733ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:42.034393Z","caller":"traceutil/trace.go:171","msg":"trace[820863475] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1186; }","duration":"123.720097ms","start":"2024-09-16T10:25:41.910661Z","end":"2024-09-16T10:25:42.034381Z","steps":["trace[820863475] 'range keys from in-memory index tree'  (duration: 123.596468ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:48.018053Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.906262ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:48.018350Z","caller":"traceutil/trace.go:171","msg":"trace[480150165] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1214; }","duration":"108.136573ms","start":"2024-09-16T10:25:47.910119Z","end":"2024-09-16T10:25:48.018256Z","steps":["trace[480150165] 'range keys from in-memory index tree'  (duration: 107.841659ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:56.581838Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"426.97955ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gcp-auth/gcp-auth-certs-patch.17f5b27f8d3a70cd\" ","response":"range_response_count:1 size:914"}
	{"level":"info","ts":"2024-09-16T10:25:56.582205Z","caller":"traceutil/trace.go:171","msg":"trace[1876587072] range","detail":"{range_begin:/registry/events/gcp-auth/gcp-auth-certs-patch.17f5b27f8d3a70cd; range_end:; response_count:1; response_revision:1260; }","duration":"427.352235ms","start":"2024-09-16T10:25:56.154829Z","end":"2024-09-16T10:25:56.582182Z","steps":["trace[1876587072] 'range keys from in-memory index tree'  (duration: 426.76737ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:56.582292Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:25:56.154792Z","time spent":"427.478129ms","remote":"127.0.0.1:46252","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":1,"response size":937,"request content":"key:\"/registry/events/gcp-auth/gcp-auth-certs-patch.17f5b27f8d3a70cd\" "}
	{"level":"warn","ts":"2024-09-16T10:25:56.582485Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"413.895969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gadget/gadget-6fj6x.17f5b28dff83df9f\" ","response":"range_response_count:1 size:779"}
	{"level":"info","ts":"2024-09-16T10:25:56.582530Z","caller":"traceutil/trace.go:171","msg":"trace[180970352] range","detail":"{range_begin:/registry/events/gadget/gadget-6fj6x.17f5b28dff83df9f; range_end:; response_count:1; response_revision:1260; }","duration":"413.942885ms","start":"2024-09-16T10:25:56.168581Z","end":"2024-09-16T10:25:56.582524Z","steps":["trace[180970352] 'range keys from in-memory index tree'  (duration: 413.76683ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:56.582589Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:25:56.168537Z","time spent":"414.012023ms","remote":"127.0.0.1:46252","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":802,"request content":"key:\"/registry/events/gadget/gadget-6fj6x.17f5b28dff83df9f\" "}
	{"level":"warn","ts":"2024-09-16T10:25:56.582741Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"412.700854ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/gadget-6fj6x\" ","response":"range_response_count:1 size:12474"}
	{"level":"info","ts":"2024-09-16T10:25:56.582772Z","caller":"traceutil/trace.go:171","msg":"trace[1843346109] range","detail":"{range_begin:/registry/pods/gadget/gadget-6fj6x; range_end:; response_count:1; response_revision:1260; }","duration":"412.731429ms","start":"2024-09-16T10:25:56.170035Z","end":"2024-09-16T10:25:56.582766Z","steps":["trace[1843346109] 'range keys from in-memory index tree'  (duration: 412.601885ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:56.582789Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:25:56.170005Z","time spent":"412.778962ms","remote":"127.0.0.1:46388","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":1,"response size":12497,"request content":"key:\"/registry/pods/gadget/gadget-6fj6x\" "}
	{"level":"info","ts":"2024-09-16T10:26:05.057609Z","caller":"traceutil/trace.go:171","msg":"trace[1461507054] transaction","detail":"{read_only:false; response_revision:1313; number_of_response:1; }","duration":"102.404349ms","start":"2024-09-16T10:26:04.955178Z","end":"2024-09-16T10:26:05.057583Z","steps":["trace[1461507054] 'process raft request'  (duration: 37.309918ms)","trace[1461507054] 'compare'  (duration: 64.836787ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:27:21.883153Z","caller":"traceutil/trace.go:171","msg":"trace[2061510529] transaction","detail":"{read_only:false; response_revision:1508; number_of_response:1; }","duration":"104.335994ms","start":"2024-09-16T10:27:21.778790Z","end":"2024-09-16T10:27:21.883126Z","steps":["trace[2061510529] 'process raft request'  (duration: 104.206155ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:27:47.379204Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.884749ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11925973792738012504 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1584 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:423 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-16T10:27:47.379309Z","caller":"traceutil/trace.go:171","msg":"trace[1066380528] transaction","detail":"{read_only:false; response_revision:1603; number_of_response:1; }","duration":"232.608957ms","start":"2024-09-16T10:27:47.146684Z","end":"2024-09-16T10:27:47.379293Z","steps":["trace[1066380528] 'process raft request'  (duration: 81.261167ms)","trace[1066380528] 'compare'  (duration: 150.79126ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:34:30.032575Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1917}
	{"level":"info","ts":"2024-09-16T10:34:30.124296Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1917,"took":"91.053327ms","hash":1787768890,"current-db-size-bytes":9306112,"current-db-size":"9.3 MB","current-db-size-in-use-bytes":5062656,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2024-09-16T10:34:30.124430Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1787768890,"revision":1917,"compact-revision":-1}
	{"level":"info","ts":"2024-09-16T10:36:12.091155Z","caller":"traceutil/trace.go:171","msg":"trace[503189620] transaction","detail":"{read_only:false; response_revision:2536; number_of_response:1; }","duration":"124.097916ms","start":"2024-09-16T10:36:11.967031Z","end":"2024-09-16T10:36:12.091129Z","steps":["trace[503189620] 'process raft request'  (duration: 123.640101ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:36:12.091671Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.159701ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:36:12.091715Z","caller":"traceutil/trace.go:171","msg":"trace[1718098908] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2536; }","duration":"108.277655ms","start":"2024-09-16T10:36:11.983424Z","end":"2024-09-16T10:36:12.091702Z","steps":["trace[1718098908] 'agreement among raft nodes before linearized reading'  (duration: 108.116443ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:36:12.092549Z","caller":"traceutil/trace.go:171","msg":"trace[451909435] linearizableReadLoop","detail":"{readStateIndex:2705; appliedIndex:2704; }","duration":"107.333905ms","start":"2024-09-16T10:36:11.983430Z","end":"2024-09-16T10:36:12.090764Z","steps":["trace[451909435] 'read index received'  (duration: 107.2004ms)","trace[451909435] 'applied index is now lower than readState.Index'  (duration: 133.119µs)"],"step_count":2}
	
	
	==> gcp-auth [1a27b8836156] <==
	2024/09/16 10:28:03 Ready to write response ...
	2024/09/16 10:36:06 Ready to marshal response ...
	2024/09/16 10:36:06 Ready to write response ...
	2024/09/16 10:36:06 Ready to marshal response ...
	2024/09/16 10:36:06 Ready to write response ...
	2024/09/16 10:36:06 Ready to marshal response ...
	2024/09/16 10:36:06 Ready to write response ...
	2024/09/16 10:36:06 Ready to marshal response ...
	2024/09/16 10:36:06 Ready to write response ...
	2024/09/16 10:36:06 Ready to marshal response ...
	2024/09/16 10:36:06 Ready to write response ...
	2024/09/16 10:36:17 Ready to marshal response ...
	2024/09/16 10:36:17 Ready to write response ...
	2024/09/16 10:36:17 Ready to marshal response ...
	2024/09/16 10:36:17 Ready to write response ...
	2024/09/16 10:36:26 Ready to marshal response ...
	2024/09/16 10:36:26 Ready to write response ...
	2024/09/16 10:36:30 Ready to marshal response ...
	2024/09/16 10:36:30 Ready to write response ...
	2024/09/16 10:36:42 Ready to marshal response ...
	2024/09/16 10:36:42 Ready to write response ...
	2024/09/16 10:36:53 Ready to marshal response ...
	2024/09/16 10:36:53 Ready to write response ...
	2024/09/16 10:37:05 Ready to marshal response ...
	2024/09/16 10:37:05 Ready to write response ...
	
	
	==> kernel <==
	 10:37:18 up 13 min,  0 users,  load average: 2.13, 1.16, 0.76
	Linux addons-855148 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3a9eef484588] <==
	W0916 10:27:55.693270       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0916 10:27:55.698483       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0916 10:27:56.224669       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0916 10:27:56.258516       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0916 10:36:06.777716       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.48.190"}
	E0916 10:36:33.729726       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0916 10:36:35.845640       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0916 10:36:47.768810       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 10:36:48.908068       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0916 10:36:53.597237       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0916 10:36:53.806247       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.193.221"}
	I0916 10:36:58.914495       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 10:36:58.914602       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 10:36:58.936156       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 10:36:58.936377       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 10:36:58.984145       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 10:36:58.984556       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 10:36:59.043821       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 10:36:59.044015       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 10:36:59.070497       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 10:36:59.070546       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0916 10:37:00.048938       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0916 10:37:00.072032       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0916 10:37:00.077125       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	I0916 10:37:05.294482       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.223.140"}
	
	
	==> kube-controller-manager [5d02005c877e] <==
	W0916 10:37:06.513434       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:37:06.513594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:37:07.561643       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-855148"
	I0916 10:37:07.736566       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0916 10:37:07.742118       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="6.395µs"
	I0916 10:37:07.746975       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0916 10:37:07.883860       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:37:07.884016       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:37:08.132979       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.656875ms"
	I0916 10:37:08.133875       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="33.368µs"
	I0916 10:37:08.682212       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0916 10:37:08.682258       1 shared_informer.go:320] Caches are synced for resource quota
	W0916 10:37:08.952831       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:37:08.952876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:37:09.073436       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0916 10:37:09.073597       1 shared_informer.go:320] Caches are synced for garbage collector
	W0916 10:37:09.770076       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:37:09.770160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:37:09.801639       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:37:09.801727       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:37:14.785624       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0916 10:37:16.632308       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:37:16.632368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:37:17.518618       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="2.985µs"
	I0916 10:37:17.795581       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	
	
	==> kube-proxy [64580a888353] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:24:40.379868       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:24:40.401829       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.55"]
	E0916 10:24:40.402028       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:24:40.508035       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:24:40.508073       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:24:40.508111       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:24:40.548340       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:24:40.548670       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:24:40.548683       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:24:40.550021       1 config.go:199] "Starting service config controller"
	I0916 10:24:40.550040       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:24:40.550073       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:24:40.550077       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:24:40.551816       1 config.go:328] "Starting node config controller"
	I0916 10:24:40.551831       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:24:40.651158       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:24:40.651211       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:24:40.651867       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [73bfc1d887f6] <==
	E0916 10:24:31.413804       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:24:31.412490       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:24:31.413962       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:24:31.412529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:24:31.413997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:24:31.412573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:24:31.414109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:24:31.414125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:24:32.237440       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:24:32.237484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:24:32.255056       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:24:32.255103       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:24:32.327755       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:24:32.327809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:24:32.432199       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:24:32.432246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:24:32.507629       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:24:32.507895       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:24:32.527357       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:24:32.527529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:24:32.607279       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:24:32.607457       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:24:32.786665       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:24:32.786711       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:24:35.084767       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:37:11 addons-855148 kubelet[1973]: I0916 10:37:11.928669    1973 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="587d8e17-7e76-48aa-8bc0-043ef496c636" path="/var/lib/kubelet/pods/587d8e17-7e76-48aa-8bc0-043ef496c636/volumes"
	Sep 16 10:37:13 addons-855148 kubelet[1973]: E0916 10:37:13.924966    1973 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="74249c41-9cde-4a11-8cdb-fb8607e89d57"
	Sep 16 10:37:14 addons-855148 kubelet[1973]: E0916 10:37:14.924072    1973 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="b0df928f-2218-475a-9940-df775ef8e077"
	Sep 16 10:37:14 addons-855148 kubelet[1973]: I0916 10:37:14.933528    1973 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-5rlr7" podStartSLOduration=8.26025698 podStartE2EDuration="9.933487275s" podCreationTimestamp="2024-09-16 10:37:05 +0000 UTC" firstStartedPulling="2024-09-16 10:37:05.781790628 +0000 UTC m=+751.996329770" lastFinishedPulling="2024-09-16 10:37:07.455020922 +0000 UTC m=+753.669560065" observedRunningTime="2024-09-16 10:37:08.123276188 +0000 UTC m=+754.337815347" watchObservedRunningTime="2024-09-16 10:37:14.933487275 +0000 UTC m=+761.148026437"
	Sep 16 10:37:17 addons-855148 kubelet[1973]: I0916 10:37:17.299601    1973 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b0df928f-2218-475a-9940-df775ef8e077-gcp-creds\") pod \"b0df928f-2218-475a-9940-df775ef8e077\" (UID: \"b0df928f-2218-475a-9940-df775ef8e077\") "
	Sep 16 10:37:17 addons-855148 kubelet[1973]: I0916 10:37:17.299656    1973 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-745hw\" (UniqueName: \"kubernetes.io/projected/b0df928f-2218-475a-9940-df775ef8e077-kube-api-access-745hw\") pod \"b0df928f-2218-475a-9940-df775ef8e077\" (UID: \"b0df928f-2218-475a-9940-df775ef8e077\") "
	Sep 16 10:37:17 addons-855148 kubelet[1973]: I0916 10:37:17.300031    1973 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0df928f-2218-475a-9940-df775ef8e077-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "b0df928f-2218-475a-9940-df775ef8e077" (UID: "b0df928f-2218-475a-9940-df775ef8e077"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:37:17 addons-855148 kubelet[1973]: I0916 10:37:17.308336    1973 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0df928f-2218-475a-9940-df775ef8e077-kube-api-access-745hw" (OuterVolumeSpecName: "kube-api-access-745hw") pod "b0df928f-2218-475a-9940-df775ef8e077" (UID: "b0df928f-2218-475a-9940-df775ef8e077"). InnerVolumeSpecName "kube-api-access-745hw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:37:17 addons-855148 kubelet[1973]: I0916 10:37:17.400834    1973 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b0df928f-2218-475a-9940-df775ef8e077-gcp-creds\") on node \"addons-855148\" DevicePath \"\""
	Sep 16 10:37:17 addons-855148 kubelet[1973]: I0916 10:37:17.400882    1973 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-745hw\" (UniqueName: \"kubernetes.io/projected/b0df928f-2218-475a-9940-df775ef8e077-kube-api-access-745hw\") on node \"addons-855148\" DevicePath \"\""
	Sep 16 10:37:17 addons-855148 kubelet[1973]: I0916 10:37:17.904508    1973 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57fr9\" (UniqueName: \"kubernetes.io/projected/ff532941-80d3-4c2f-8fee-58e373f194d0-kube-api-access-57fr9\") pod \"ff532941-80d3-4c2f-8fee-58e373f194d0\" (UID: \"ff532941-80d3-4c2f-8fee-58e373f194d0\") "
	Sep 16 10:37:17 addons-855148 kubelet[1973]: I0916 10:37:17.913248    1973 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff532941-80d3-4c2f-8fee-58e373f194d0-kube-api-access-57fr9" (OuterVolumeSpecName: "kube-api-access-57fr9") pod "ff532941-80d3-4c2f-8fee-58e373f194d0" (UID: "ff532941-80d3-4c2f-8fee-58e373f194d0"). InnerVolumeSpecName "kube-api-access-57fr9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:37:17 addons-855148 kubelet[1973]: I0916 10:37:17.939762    1973 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0df928f-2218-475a-9940-df775ef8e077" path="/var/lib/kubelet/pods/b0df928f-2218-475a-9940-df775ef8e077/volumes"
	Sep 16 10:37:18 addons-855148 kubelet[1973]: I0916 10:37:18.005201    1973 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cslxm\" (UniqueName: \"kubernetes.io/projected/4332ab2c-d8b5-4a98-bd5a-54ed98d85a50-kube-api-access-cslxm\") pod \"4332ab2c-d8b5-4a98-bd5a-54ed98d85a50\" (UID: \"4332ab2c-d8b5-4a98-bd5a-54ed98d85a50\") "
	Sep 16 10:37:18 addons-855148 kubelet[1973]: I0916 10:37:18.005289    1973 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-57fr9\" (UniqueName: \"kubernetes.io/projected/ff532941-80d3-4c2f-8fee-58e373f194d0-kube-api-access-57fr9\") on node \"addons-855148\" DevicePath \"\""
	Sep 16 10:37:18 addons-855148 kubelet[1973]: I0916 10:37:18.008196    1973 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4332ab2c-d8b5-4a98-bd5a-54ed98d85a50-kube-api-access-cslxm" (OuterVolumeSpecName: "kube-api-access-cslxm") pod "4332ab2c-d8b5-4a98-bd5a-54ed98d85a50" (UID: "4332ab2c-d8b5-4a98-bd5a-54ed98d85a50"). InnerVolumeSpecName "kube-api-access-cslxm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:37:18 addons-855148 kubelet[1973]: I0916 10:37:18.105758    1973 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cslxm\" (UniqueName: \"kubernetes.io/projected/4332ab2c-d8b5-4a98-bd5a-54ed98d85a50-kube-api-access-cslxm\") on node \"addons-855148\" DevicePath \"\""
	Sep 16 10:37:18 addons-855148 kubelet[1973]: I0916 10:37:18.250850    1973 scope.go:117] "RemoveContainer" containerID="3c737257dfc48b2888c504fae0168c95710dcf9ff50c9a7dd9ce75c9e0478bbb"
	Sep 16 10:37:18 addons-855148 kubelet[1973]: I0916 10:37:18.298003    1973 scope.go:117] "RemoveContainer" containerID="3c737257dfc48b2888c504fae0168c95710dcf9ff50c9a7dd9ce75c9e0478bbb"
	Sep 16 10:37:18 addons-855148 kubelet[1973]: E0916 10:37:18.298959    1973 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 3c737257dfc48b2888c504fae0168c95710dcf9ff50c9a7dd9ce75c9e0478bbb" containerID="3c737257dfc48b2888c504fae0168c95710dcf9ff50c9a7dd9ce75c9e0478bbb"
	Sep 16 10:37:18 addons-855148 kubelet[1973]: I0916 10:37:18.298992    1973 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"3c737257dfc48b2888c504fae0168c95710dcf9ff50c9a7dd9ce75c9e0478bbb"} err="failed to get container status \"3c737257dfc48b2888c504fae0168c95710dcf9ff50c9a7dd9ce75c9e0478bbb\": rpc error: code = Unknown desc = Error response from daemon: No such container: 3c737257dfc48b2888c504fae0168c95710dcf9ff50c9a7dd9ce75c9e0478bbb"
	Sep 16 10:37:18 addons-855148 kubelet[1973]: I0916 10:37:18.299012    1973 scope.go:117] "RemoveContainer" containerID="0fec38a4a31bb83eca80890a7b4efc7cc20c366db267953e9b9235fb5336ab06"
	Sep 16 10:37:18 addons-855148 kubelet[1973]: I0916 10:37:18.318765    1973 scope.go:117] "RemoveContainer" containerID="0fec38a4a31bb83eca80890a7b4efc7cc20c366db267953e9b9235fb5336ab06"
	Sep 16 10:37:18 addons-855148 kubelet[1973]: E0916 10:37:18.319855    1973 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 0fec38a4a31bb83eca80890a7b4efc7cc20c366db267953e9b9235fb5336ab06" containerID="0fec38a4a31bb83eca80890a7b4efc7cc20c366db267953e9b9235fb5336ab06"
	Sep 16 10:37:18 addons-855148 kubelet[1973]: I0916 10:37:18.319890    1973 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"0fec38a4a31bb83eca80890a7b4efc7cc20c366db267953e9b9235fb5336ab06"} err="failed to get container status \"0fec38a4a31bb83eca80890a7b4efc7cc20c366db267953e9b9235fb5336ab06\": rpc error: code = Unknown desc = Error response from daemon: No such container: 0fec38a4a31bb83eca80890a7b4efc7cc20c366db267953e9b9235fb5336ab06"
	
	
	==> storage-provisioner [0fbef90405e1] <==
	I0916 10:24:49.591792       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:24:49.716943       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:24:49.717145       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:24:49.855485       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:24:49.863989       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-855148_e6e6b882-fe14-441d-9985-4be3806115c1!
	I0916 10:24:49.869534       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ac46d64a-ab3a-4008-8677-85cc648a3f6d", APIVersion:"v1", ResourceVersion:"855", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-855148_e6e6b882-fe14-441d-9985-4be3806115c1 became leader
	I0916 10:24:50.077083       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-855148_e6e6b882-fe14-441d-9985-4be3806115c1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-855148 -n addons-855148
helpers_test.go:261: (dbg) Run:  kubectl --context addons-855148 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-855148 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-855148 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-855148/192.168.39.55
	Start Time:       Mon, 16 Sep 2024 10:28:03 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tbph2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tbph2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason          Age                    From               Message
	  ----     ------          ----                   ----               -------
	  Normal   Scheduled       9m16s                  default-scheduler  Successfully assigned default/busybox to addons-855148
	  Normal   SandboxChanged  9m14s                  kubelet            Pod sandbox changed, it will be killed and re-created.
	  Warning  Failed          7m57s (x6 over 9m14s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling         7m44s (x4 over 9m15s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed          7m43s (x4 over 9m15s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed          7m43s (x4 over 9m15s)  kubelet            Error: ErrImagePull
	  Normal   BackOff         4m8s (x21 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (73.48s)

                                                
                                    

Test pass (309/341)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 8.35
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.1/json-events 3.47
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.05
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.58
22 TestOffline 119.26
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 217.48
29 TestAddons/serial/Volcano 40.82
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 21.5
35 TestAddons/parallel/InspektorGadget 10.95
36 TestAddons/parallel/MetricsServer 6.61
37 TestAddons/parallel/HelmTiller 11.17
39 TestAddons/parallel/CSI 46.82
40 TestAddons/parallel/Headlamp 18.7
41 TestAddons/parallel/CloudSpanner 6.55
42 TestAddons/parallel/LocalPath 54.85
43 TestAddons/parallel/NvidiaDevicePlugin 6.46
44 TestAddons/parallel/Yakd 10.63
45 TestAddons/StoppedEnableDisable 13.54
46 TestCertOptions 87.29
47 TestCertExpiration 307.43
48 TestDockerFlags 57.16
49 TestForceSystemdFlag 53.54
50 TestForceSystemdEnv 73.71
52 TestKVMDriverInstallOrUpdate 4.26
56 TestErrorSpam/setup 49.05
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.72
59 TestErrorSpam/pause 1.14
60 TestErrorSpam/unpause 1.3
61 TestErrorSpam/stop 7.01
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 97.3
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 39.41
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.29
73 TestFunctional/serial/CacheCmd/cache/add_local 1.21
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.09
78 TestFunctional/serial/CacheCmd/cache/delete 0.08
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
81 TestFunctional/serial/ExtraConfig 40.97
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 0.88
84 TestFunctional/serial/LogsFileCmd 0.91
85 TestFunctional/serial/InvalidService 3.81
87 TestFunctional/parallel/ConfigCmd 0.33
88 TestFunctional/parallel/DashboardCmd 13.15
89 TestFunctional/parallel/DryRun 0.26
90 TestFunctional/parallel/InternationalLanguage 0.13
91 TestFunctional/parallel/StatusCmd 0.76
95 TestFunctional/parallel/ServiceCmdConnect 27.53
96 TestFunctional/parallel/AddonsCmd 0.13
97 TestFunctional/parallel/PersistentVolumeClaim 46.61
99 TestFunctional/parallel/SSHCmd 0.37
100 TestFunctional/parallel/CpCmd 1.13
101 TestFunctional/parallel/MySQL 27.95
102 TestFunctional/parallel/FileSync 0.22
103 TestFunctional/parallel/CertSync 1.21
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.24
111 TestFunctional/parallel/License 0.16
121 TestFunctional/parallel/ServiceCmd/DeployApp 24.15
122 TestFunctional/parallel/ServiceCmd/List 0.41
123 TestFunctional/parallel/ServiceCmd/JSONOutput 0.41
124 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
125 TestFunctional/parallel/ServiceCmd/Format 0.28
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.28
127 TestFunctional/parallel/ServiceCmd/URL 0.32
128 TestFunctional/parallel/ProfileCmd/profile_list 0.31
129 TestFunctional/parallel/MountCmd/any-port 6.44
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
131 TestFunctional/parallel/DockerEnv/bash 0.81
132 TestFunctional/parallel/Version/short 0.05
133 TestFunctional/parallel/Version/components 0.7
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.07
142 TestFunctional/parallel/ImageCommands/Setup 1.5
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.04
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.79
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.59
146 TestFunctional/parallel/MountCmd/specific-port 1.93
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.4
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.69
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.48
152 TestFunctional/delete_echo-server_images 0.03
153 TestFunctional/delete_my-image_image 0.01
154 TestFunctional/delete_minikube_cached_images 0.01
155 TestGvisorAddon 182.7
158 TestMultiControlPlane/serial/StartCluster 210.78
159 TestMultiControlPlane/serial/DeployApp 5.14
160 TestMultiControlPlane/serial/PingHostFromPods 1.14
161 TestMultiControlPlane/serial/AddWorkerNode 62.19
162 TestMultiControlPlane/serial/NodeLabels 0.06
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.5
164 TestMultiControlPlane/serial/CopyFile 12.09
165 TestMultiControlPlane/serial/StopSecondaryNode 13.07
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.38
167 TestMultiControlPlane/serial/RestartSecondaryNode 45.21
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.52
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 247.89
170 TestMultiControlPlane/serial/DeleteSecondaryNode 6.92
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.35
172 TestMultiControlPlane/serial/StopCluster 38.17
173 TestMultiControlPlane/serial/RestartCluster 123.19
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.35
175 TestMultiControlPlane/serial/AddSecondaryNode 79.8
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.51
179 TestImageBuild/serial/Setup 46.04
180 TestImageBuild/serial/NormalBuild 1.96
181 TestImageBuild/serial/BuildWithBuildArg 1.1
182 TestImageBuild/serial/BuildWithDockerIgnore 0.99
183 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.91
187 TestJSONOutput/start/Command 91.45
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.55
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.51
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 12.49
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.18
215 TestMainNoArgs 0.04
216 TestMinikubeProfile 98.79
219 TestMountStart/serial/StartWithMountFirst 28.4
220 TestMountStart/serial/VerifyMountFirst 0.35
221 TestMountStart/serial/StartWithMountSecond 31.68
222 TestMountStart/serial/VerifyMountSecond 0.37
223 TestMountStart/serial/DeleteFirst 0.69
224 TestMountStart/serial/VerifyMountPostDelete 0.41
225 TestMountStart/serial/Stop 2.44
226 TestMountStart/serial/RestartStopped 26.85
227 TestMountStart/serial/VerifyMountPostStop 0.36
230 TestMultiNode/serial/FreshStart2Nodes 122.04
231 TestMultiNode/serial/DeployApp2Nodes 4.06
232 TestMultiNode/serial/PingHostFrom2Pods 0.75
233 TestMultiNode/serial/AddNode 57.15
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.21
236 TestMultiNode/serial/CopyFile 6.91
237 TestMultiNode/serial/StopNode 3.29
238 TestMultiNode/serial/StartAfterStop 41.82
239 TestMultiNode/serial/RestartKeepsNodes 189.73
240 TestMultiNode/serial/DeleteNode 2.16
241 TestMultiNode/serial/StopMultiNode 24.9
242 TestMultiNode/serial/RestartMultiNode 197.86
243 TestMultiNode/serial/ValidateNameConflict 46.19
248 TestPreload 187.05
250 TestScheduledStopUnix 121.5
251 TestSkaffold 122.33
254 TestRunningBinaryUpgrade 184.2
256 TestKubernetesUpgrade 225.66
269 TestStoppedBinaryUpgrade/Setup 0.52
270 TestStoppedBinaryUpgrade/Upgrade 215.53
279 TestPause/serial/Start 128.51
280 TestStoppedBinaryUpgrade/MinikubeLogs 1.33
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
283 TestNoKubernetes/serial/StartWithK8s 49
284 TestPause/serial/SecondStartNoReconfiguration 98.29
285 TestNoKubernetes/serial/StartWithStopK8s 47.8
286 TestNoKubernetes/serial/Start 47.04
287 TestPause/serial/Pause 0.62
288 TestPause/serial/VerifyStatus 0.25
289 TestPause/serial/Unpause 0.57
290 TestPause/serial/PauseAgain 0.63
291 TestPause/serial/DeletePaused 0.98
292 TestPause/serial/VerifyDeletedResources 0.37
293 TestNoKubernetes/serial/VerifyK8sNotRunning 0.18
294 TestNoKubernetes/serial/ProfileList 0.75
295 TestNoKubernetes/serial/Stop 3.28
296 TestNoKubernetes/serial/StartNoArgs 69.34
297 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
298 TestNetworkPlugins/group/auto/Start 88.27
299 TestNetworkPlugins/group/kindnet/Start 94.14
300 TestNetworkPlugins/group/auto/KubeletFlags 0.21
301 TestNetworkPlugins/group/auto/NetCatPod 11.23
302 TestNetworkPlugins/group/calico/Start 87.49
303 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
304 TestNetworkPlugins/group/auto/DNS 0.2
305 TestNetworkPlugins/group/auto/Localhost 0.15
306 TestNetworkPlugins/group/auto/HairPin 0.14
307 TestNetworkPlugins/group/custom-flannel/Start 93.24
308 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
309 TestNetworkPlugins/group/kindnet/NetCatPod 10.23
310 TestNetworkPlugins/group/false/Start 100.15
311 TestNetworkPlugins/group/kindnet/DNS 0.16
312 TestNetworkPlugins/group/kindnet/Localhost 0.12
313 TestNetworkPlugins/group/kindnet/HairPin 0.13
314 TestNetworkPlugins/group/enable-default-cni/Start 137.93
315 TestNetworkPlugins/group/calico/ControllerPod 6.01
316 TestNetworkPlugins/group/calico/KubeletFlags 0.25
317 TestNetworkPlugins/group/calico/NetCatPod 14.3
318 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
319 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.26
320 TestNetworkPlugins/group/calico/DNS 0.18
321 TestNetworkPlugins/group/calico/Localhost 0.16
322 TestNetworkPlugins/group/calico/HairPin 0.17
323 TestNetworkPlugins/group/custom-flannel/DNS 0.19
324 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
325 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
326 TestNetworkPlugins/group/false/KubeletFlags 0.49
327 TestNetworkPlugins/group/false/NetCatPod 11.56
328 TestNetworkPlugins/group/flannel/Start 69.62
329 TestNetworkPlugins/group/bridge/Start 123.42
330 TestNetworkPlugins/group/false/DNS 21.11
331 TestNetworkPlugins/group/false/Localhost 0.15
332 TestNetworkPlugins/group/false/HairPin 0.16
333 TestNetworkPlugins/group/kubenet/Start 102.14
334 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
335 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
336 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
337 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
338 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
339 TestNetworkPlugins/group/flannel/ControllerPod 6.01
340 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
341 TestNetworkPlugins/group/flannel/NetCatPod 12.24
343 TestStartStop/group/old-k8s-version/serial/FirstStart 144.38
344 TestNetworkPlugins/group/flannel/DNS 0.29
345 TestNetworkPlugins/group/flannel/Localhost 0.18
346 TestNetworkPlugins/group/flannel/HairPin 0.16
348 TestStartStop/group/no-preload/serial/FirstStart 103.46
349 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
350 TestNetworkPlugins/group/bridge/NetCatPod 11.24
351 TestNetworkPlugins/group/bridge/DNS 0.15
352 TestNetworkPlugins/group/bridge/Localhost 0.13
353 TestNetworkPlugins/group/bridge/HairPin 0.22
354 TestNetworkPlugins/group/kubenet/KubeletFlags 0.21
355 TestNetworkPlugins/group/kubenet/NetCatPod 12.25
356 TestNetworkPlugins/group/kubenet/DNS 0.19
357 TestNetworkPlugins/group/kubenet/Localhost 0.16
358 TestNetworkPlugins/group/kubenet/HairPin 0.17
360 TestStartStop/group/embed-certs/serial/FirstStart 65.1
362 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 73.39
363 TestStartStop/group/no-preload/serial/DeployApp 8.32
364 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.29
365 TestStartStop/group/no-preload/serial/Stop 13.37
366 TestStartStop/group/old-k8s-version/serial/DeployApp 10.08
367 TestStartStop/group/embed-certs/serial/DeployApp 8.36
368 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
369 TestStartStop/group/no-preload/serial/SecondStart 296.76
370 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.03
371 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.14
372 TestStartStop/group/embed-certs/serial/Stop 13.42
373 TestStartStop/group/old-k8s-version/serial/Stop 13.38
374 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
375 TestStartStop/group/embed-certs/serial/SecondStart 303.03
376 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
377 TestStartStop/group/old-k8s-version/serial/SecondStart 426.17
378 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.33
379 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
380 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.67
381 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
382 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 316.75
383 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
384 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
386 TestStartStop/group/no-preload/serial/Pause 2.54
388 TestStartStop/group/newest-cni/serial/FirstStart 61.61
389 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
390 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
391 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.2
392 TestStartStop/group/embed-certs/serial/Pause 2.3
393 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
394 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
395 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
396 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.41
397 TestStartStop/group/newest-cni/serial/DeployApp 0
398 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.95
399 TestStartStop/group/newest-cni/serial/Stop 12.61
400 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
401 TestStartStop/group/newest-cni/serial/SecondStart 35.26
402 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
403 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
404 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.2
405 TestStartStop/group/newest-cni/serial/Pause 2.13
406 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
407 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
408 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
409 TestStartStop/group/old-k8s-version/serial/Pause 2.09
TestDownloadOnly/v1.20.0/json-events (8.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-363393 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-363393 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (8.354206702s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.35s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-363393
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-363393: exit status 85 (54.195161ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-363393 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |          |
	|         | -p download-only-363393        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:23:31
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:23:31.927069   12053 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:23:31.927161   12053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:31.927169   12053 out.go:358] Setting ErrFile to fd 2...
	I0916 10:23:31.927173   12053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:31.927368   12053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3871/.minikube/bin
	W0916 10:23:31.927481   12053 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19651-3871/.minikube/config/config.json: open /home/jenkins/minikube-integration/19651-3871/.minikube/config/config.json: no such file or directory
	I0916 10:23:31.928060   12053 out.go:352] Setting JSON to true
	I0916 10:23:31.928908   12053 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":361,"bootTime":1726481851,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:23:31.928996   12053 start.go:139] virtualization: kvm guest
	I0916 10:23:31.931157   12053 out.go:97] [download-only-363393] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:23:31.931305   12053 notify.go:220] Checking for updates...
	W0916 10:23:31.931303   12053 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3871/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:23:31.932582   12053 out.go:169] MINIKUBE_LOCATION=19651
	I0916 10:23:31.934010   12053 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:23:31.935329   12053 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19651-3871/kubeconfig
	I0916 10:23:31.936884   12053 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3871/.minikube
	I0916 10:23:31.938185   12053 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 10:23:31.940721   12053 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 10:23:31.940929   12053 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:23:32.049897   12053 out.go:97] Using the kvm2 driver based on user configuration
	I0916 10:23:32.049921   12053 start.go:297] selected driver: kvm2
	I0916 10:23:32.049927   12053 start.go:901] validating driver "kvm2" against <nil>
	I0916 10:23:32.050228   12053 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:23:32.050342   12053 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19651-3871/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 10:23:32.065139   12053 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 10:23:32.065184   12053 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:23:32.065674   12053 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0916 10:23:32.065838   12053 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 10:23:32.065869   12053 cni.go:84] Creating CNI manager for ""
	I0916 10:23:32.065921   12053 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0916 10:23:32.065978   12053 start.go:340] cluster config:
	{Name:download-only-363393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-363393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:32.066149   12053 iso.go:125] acquiring lock: {Name:mk549d8744cb1b2697cd1f4f389577317a4ca0fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:23:32.067885   12053 out.go:97] Downloading VM boot image ...
	I0916 10:23:32.067916   12053 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19651-3871/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 10:23:34.513695   12053 out.go:97] Starting "download-only-363393" primary control-plane node in "download-only-363393" cluster
	I0916 10:23:34.513727   12053 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 10:23:34.537832   12053 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0916 10:23:34.537862   12053 cache.go:56] Caching tarball of preloaded images
	I0916 10:23:34.538025   12053 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 10:23:34.540039   12053 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0916 10:23:34.540066   12053 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0916 10:23:34.565117   12053 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19651-3871/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-363393 host does not exist
	  To start a cluster, run: "minikube start -p download-only-363393"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-363393
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.1/json-events (3.47s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-799669 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-799669 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=kvm2 : (3.466385645s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (3.47s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-799669
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-799669: exit status 85 (54.566813ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-363393 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | -p download-only-363393        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-363393        | download-only-363393 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | -o=json --download-only        | download-only-799669 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | -p download-only-799669        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:23:40
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:23:40.575571   12256 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:23:40.575692   12256 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:40.575700   12256 out.go:358] Setting ErrFile to fd 2...
	I0916 10:23:40.575705   12256 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:40.575857   12256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3871/.minikube/bin
	I0916 10:23:40.576368   12256 out.go:352] Setting JSON to true
	I0916 10:23:40.577145   12256 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":370,"bootTime":1726481851,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:23:40.577233   12256 start.go:139] virtualization: kvm guest
	I0916 10:23:40.579417   12256 out.go:97] [download-only-799669] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:23:40.579550   12256 notify.go:220] Checking for updates...
	I0916 10:23:40.580956   12256 out.go:169] MINIKUBE_LOCATION=19651
	I0916 10:23:40.582202   12256 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:23:40.583601   12256 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19651-3871/kubeconfig
	I0916 10:23:40.584813   12256 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3871/.minikube
	I0916 10:23:40.585975   12256 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-799669 host does not exist
	  To start a cluster, run: "minikube start -p download-only-799669"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.05s)

TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-799669
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-824353 --alsologtostderr --binary-mirror http://127.0.0.1:42639 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-824353" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-824353
--- PASS: TestBinaryMirror (0.58s)

TestOffline (119.26s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-189194 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-189194 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m58.299629585s)
helpers_test.go:175: Cleaning up "offline-docker-189194" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-189194
--- PASS: TestOffline (119.26s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-855148
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-855148: exit status 85 (49.93949ms)

-- stdout --
	* Profile "addons-855148" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-855148"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-855148
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-855148: exit status 85 (47.858372ms)

-- stdout --
	* Profile "addons-855148" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-855148"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (217.48s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-855148 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-855148 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m37.478728676s)
--- PASS: TestAddons/Setup (217.48s)

TestAddons/serial/Volcano (40.82s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 21.195973ms
addons_test.go:905: volcano-admission stabilized in 21.293572ms
addons_test.go:897: volcano-scheduler stabilized in 21.370609ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-wqcnb" [7fcc3652-61a8-47d2-8664-defd1ca3ad24] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003884702s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-4jl9w" [0bc2809b-5d1d-41fe-bfe0-df860176a114] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004132898s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-wrcnn" [19a45f47-a4ed-4d9c-babe-37d70ed9730d] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003594395s
addons_test.go:932: (dbg) Run:  kubectl --context addons-855148 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-855148 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-855148 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [fcd26434-5dd8-40df-9a12-82d27df5df9c] Pending
helpers_test.go:344: "test-job-nginx-0" [fcd26434-5dd8-40df-9a12-82d27df5df9c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [fcd26434-5dd8-40df-9a12-82d27df5df9c] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.004392433s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-855148 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-855148 addons disable volcano --alsologtostderr -v=1: (10.375625731s)
--- PASS: TestAddons/serial/Volcano (40.82s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-855148 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-855148 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Ingress (21.5s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-855148 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-855148 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-855148 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0794fcb4-c7fc-4e1f-b067-cfc75fed6e8b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0794fcb4-c7fc-4e1f-b067-cfc75fed6e8b] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.005276851s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-855148 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-855148 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-855148 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.55
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-855148 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-855148 addons disable ingress-dns --alsologtostderr -v=1: (1.630039174s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-855148 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-855148 addons disable ingress --alsologtostderr -v=1: (7.713825681s)
--- PASS: TestAddons/parallel/Ingress (21.50s)

TestAddons/parallel/InspektorGadget (10.95s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6fj6x" [7d75750e-6b6f-4b2e-9b9b-809be8b90e32] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004604729s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-855148
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-855148: (5.947165532s)
--- PASS: TestAddons/parallel/InspektorGadget (10.95s)

TestAddons/parallel/MetricsServer (6.61s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.417592ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-85g9f" [db4177ae-2e39-4669-9705-a670a5333534] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004358939s
addons_test.go:417: (dbg) Run:  kubectl --context addons-855148 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-855148 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.61s)

TestAddons/parallel/HelmTiller (11.17s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.244673ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-dtqxt" [1b093cc0-708b-4faa-9390-dd65d9ebc725] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.004450794s
addons_test.go:475: (dbg) Run:  kubectl --context addons-855148 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-855148 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.647177575s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-855148 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.17s)

TestAddons/parallel/CSI (46.82s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.065267ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-855148 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-855148 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0d47fc9a-d991-4177-bf9a-7df3316f2699] Pending
helpers_test.go:344: "task-pv-pod" [0d47fc9a-d991-4177-bf9a-7df3316f2699] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0d47fc9a-d991-4177-bf9a-7df3316f2699] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003873106s
addons_test.go:590: (dbg) Run:  kubectl --context addons-855148 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-855148 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-855148 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-855148 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-855148 delete pod task-pv-pod: (1.341432369s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-855148 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-855148 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-855148 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1624b7f5-d6dc-4880-949a-54eeaebac65e] Pending
helpers_test.go:344: "task-pv-pod-restore" [1624b7f5-d6dc-4880-949a-54eeaebac65e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1624b7f5-d6dc-4880-949a-54eeaebac65e] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003342198s
addons_test.go:632: (dbg) Run:  kubectl --context addons-855148 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-855148 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-855148 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-855148 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-855148 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.578730026s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-855148 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.82s)

TestAddons/parallel/Headlamp (18.7s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-855148 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-w2f2z" [7faf70f1-a501-44ed-a5ec-9af36b70a425] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-w2f2z" [7faf70f1-a501-44ed-a5ec-9af36b70a425] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-w2f2z" [7faf70f1-a501-44ed-a5ec-9af36b70a425] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.005859531s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-855148 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-855148 addons disable headlamp --alsologtostderr -v=1: (5.702003775s)
--- PASS: TestAddons/parallel/Headlamp (18.70s)

TestAddons/parallel/CloudSpanner (6.55s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-p7chr" [255ed12e-6e0c-40e3-b5e9-2ead670b2011] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004784747s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-855148
--- PASS: TestAddons/parallel/CloudSpanner (6.55s)

TestAddons/parallel/LocalPath (54.85s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-855148 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-855148 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-855148 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ed6757b0-6dbe-48df-83e1-0ad557348a1e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ed6757b0-6dbe-48df-83e1-0ad557348a1e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ed6757b0-6dbe-48df-83e1-0ad557348a1e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003520572s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-855148 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-855148 ssh "cat /opt/local-path-provisioner/pvc-5340e152-c54e-4082-abd2-db1266cf31fd_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-855148 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-855148 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-855148 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-855148 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.987093688s)
--- PASS: TestAddons/parallel/LocalPath (54.85s)

TestAddons/parallel/NvidiaDevicePlugin (6.46s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vnxj2" [5798be15-1585-4c1b-84bd-507b89b7d751] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005151816s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-855148
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.46s)

TestAddons/parallel/Yakd (10.63s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-r2kv4" [6a9dd78c-334c-42e6-a149-39aa001aec2b] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003712294s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-855148 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-855148 addons disable yakd --alsologtostderr -v=1: (5.623431282s)
--- PASS: TestAddons/parallel/Yakd (10.63s)

TestAddons/StoppedEnableDisable (13.54s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-855148
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-855148: (13.287936041s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-855148
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-855148
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-855148
--- PASS: TestAddons/StoppedEnableDisable (13.54s)

TestCertOptions (87.29s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-268415 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E0916 11:27:22.667049   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-268415 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m25.689747555s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-268415 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-268415 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-268415 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-268415" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-268415
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-268415: (1.066305141s)
--- PASS: TestCertOptions (87.29s)

TestCertExpiration (307.43s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-437782 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-437782 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m30.670602019s)
E0916 11:26:49.104976   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/skaffold-413609/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-437782 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-437782 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (35.698781866s)
helpers_test.go:175: Cleaning up "cert-expiration-437782" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-437782
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-437782: (1.06037325s)
--- PASS: TestCertExpiration (307.43s)

TestDockerFlags (57.16s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-810642 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
E0916 11:26:28.623031   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/skaffold-413609/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-810642 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (55.74580298s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-810642 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-810642 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-810642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-810642
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-810642: (1.019051301s)
--- PASS: TestDockerFlags (57.16s)

TestForceSystemdFlag (53.54s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-224674 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-224674 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (52.52861186s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-224674 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-224674" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-224674
--- PASS: TestForceSystemdFlag (53.54s)

TestForceSystemdEnv (73.71s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-629361 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-629361 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m12.6634329s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-629361 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-629361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-629361
--- PASS: TestForceSystemdEnv (73.71s)

TestKVMDriverInstallOrUpdate (4.26s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.26s)

TestErrorSpam/setup (49.05s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-711232 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-711232 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-711232 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-711232 --driver=kvm2 : (49.046482986s)
--- PASS: TestErrorSpam/setup (49.05s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-711232 --log_dir /tmp/nospam-711232 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-711232 --log_dir /tmp/nospam-711232 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-711232 --log_dir /tmp/nospam-711232 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.72s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-711232 --log_dir /tmp/nospam-711232 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-711232 --log_dir /tmp/nospam-711232 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-711232 --log_dir /tmp/nospam-711232 status
--- PASS: TestErrorSpam/status (0.72s)

TestErrorSpam/pause (1.14s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-711232 --log_dir /tmp/nospam-711232 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-711232 --log_dir /tmp/nospam-711232 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-711232 --log_dir /tmp/nospam-711232 pause
--- PASS: TestErrorSpam/pause (1.14s)

TestErrorSpam/unpause (1.3s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-711232 --log_dir /tmp/nospam-711232 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-711232 --log_dir /tmp/nospam-711232 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-711232 --log_dir /tmp/nospam-711232 unpause
--- PASS: TestErrorSpam/unpause (1.30s)

TestErrorSpam/stop (7.01s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-711232 --log_dir /tmp/nospam-711232 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-711232 --log_dir /tmp/nospam-711232 stop: (3.493490264s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-711232 --log_dir /tmp/nospam-711232 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-711232 --log_dir /tmp/nospam-711232 stop: (1.965069621s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-711232 --log_dir /tmp/nospam-711232 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-711232 --log_dir /tmp/nospam-711232 stop: (1.555707079s)
--- PASS: TestErrorSpam/stop (7.01s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19651-3871/.minikube/files/etc/test/nested/copy/12041/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (97.3s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-384697 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-384697 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m37.296694557s)
--- PASS: TestFunctional/serial/StartWithProxy (97.30s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.41s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-384697 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-384697 --alsologtostderr -v=8: (39.412419215s)
functional_test.go:663: soft start took 39.413153567s for "functional-384697" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.41s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-384697 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.29s)

TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-384697 /tmp/TestFunctionalserialCacheCmdcacheadd_local1511663618/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 cache add minikube-local-cache-test:functional-384697
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 cache delete minikube-local-cache-test:functional-384697
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-384697
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384697 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (197.523084ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.09s)
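The reload flow exercised above can be repeated by hand. A minimal sketch, assuming the `functional-384697` profile from this run is still up and `out/minikube-linux-amd64` is the binary under test (not verifiable without a live cluster):

```shell
# Delete the cached image inside the node...
out/minikube-linux-amd64 -p functional-384697 ssh sudo docker rmi registry.k8s.io/pause:latest

# ...so crictl inspecti fails ("no such image ... present", exit status 1):
out/minikube-linux-amd64 -p functional-384697 ssh sudo crictl inspecti registry.k8s.io/pause:latest || true

# Re-push everything in minikube's host-side cache back into the node:
out/minikube-linux-amd64 -p functional-384697 cache reload

# After the reload the same inspect succeeds:
out/minikube-linux-amd64 -p functional-384697 ssh sudo crictl inspecti registry.k8s.io/pause:latest
```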

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 kubectl -- --context functional-384697 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-384697 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.97s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-384697 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-384697 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.964881958s)
functional_test.go:761: restart took 40.96499177s for "functional-384697" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.97s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-384697 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
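The phase/status pairs above come from the test parsing `-o=json`. The same check can be sketched from the command line against a live profile (jsonpath query is an illustrative equivalent, not what the test literally runs):

```shell
# Print each control-plane pod with its phase, e.g. "etcd-functional-384697  Running":
kubectl --context functional-384697 get po -l tier=control-plane -n kube-system \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
```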

                                                
                                    
TestFunctional/serial/LogsCmd (0.88s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 logs
--- PASS: TestFunctional/serial/LogsCmd (0.88s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.91s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 logs --file /tmp/TestFunctionalserialLogsFileCmd2416101391/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.91s)

                                                
                                    
TestFunctional/serial/InvalidService (3.81s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-384697 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-384697
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-384697: exit status 115 (263.699855ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.232:32630 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-384697 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.81s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.33s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384697 config get cpus: exit status 14 (56.431425ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384697 config get cpus: exit status 14 (43.769805ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)
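The exit status 14 above is minikube's "key not found in config" code, and the test drives set/get/unset through it twice. A hand-run sketch of the same contract, assuming the profile from this run (requires a live minikube install to execute):

```shell
p=functional-384697
out/minikube-linux-amd64 -p "$p" config set cpus 2      # persist cpus=2 for the profile
out/minikube-linux-amd64 -p "$p" config get cpus        # prints the stored value
out/minikube-linux-amd64 -p "$p" config unset cpus      # remove the key again
out/minikube-linux-amd64 -p "$p" config get cpus \
  || echo "unset key -> exit $?"                        # exit status 14, as in the log
```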

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.15s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-384697 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-384697 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21413: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.15s)

                                                
                                    
TestFunctional/parallel/DryRun (0.26s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-384697 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-384697 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (141.146275ms)

                                                
                                                
-- stdout --
	* [functional-384697] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3871/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3871/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:42:10.385484   20979 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:42:10.385619   20979 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:42:10.385632   20979 out.go:358] Setting ErrFile to fd 2...
	I0916 10:42:10.385638   20979 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:42:10.385910   20979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3871/.minikube/bin
	I0916 10:42:10.386580   20979 out.go:352] Setting JSON to false
	I0916 10:42:10.387990   20979 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1479,"bootTime":1726481851,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:42:10.388191   20979 start.go:139] virtualization: kvm guest
	I0916 10:42:10.390459   20979 out.go:177] * [functional-384697] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:42:10.392058   20979 notify.go:220] Checking for updates...
	I0916 10:42:10.392106   20979 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:42:10.393430   20979 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:42:10.394660   20979 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3871/kubeconfig
	I0916 10:42:10.396016   20979 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3871/.minikube
	I0916 10:42:10.397307   20979 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:42:10.398692   20979 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:42:10.400365   20979 config.go:182] Loaded profile config "functional-384697": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:42:10.400969   20979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:42:10.401019   20979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:42:10.422519   20979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40213
	I0916 10:42:10.423194   20979 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:42:10.423818   20979 main.go:141] libmachine: Using API Version  1
	I0916 10:42:10.423838   20979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:42:10.424277   20979 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:42:10.424477   20979 main.go:141] libmachine: (functional-384697) Calling .DriverName
	I0916 10:42:10.424723   20979 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:42:10.425078   20979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:42:10.425110   20979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:42:10.442577   20979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43771
	I0916 10:42:10.443033   20979 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:42:10.443539   20979 main.go:141] libmachine: Using API Version  1
	I0916 10:42:10.443564   20979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:42:10.443906   20979 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:42:10.444108   20979 main.go:141] libmachine: (functional-384697) Calling .DriverName
	I0916 10:42:10.476885   20979 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 10:42:10.478023   20979 start.go:297] selected driver: kvm2
	I0916 10:42:10.478033   20979 start.go:901] validating driver "kvm2" against &{Name:functional-384697 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-384697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.232 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:42:10.478127   20979 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:42:10.480007   20979 out.go:201] 
	W0916 10:42:10.481084   20979 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 10:42:10.482222   20979 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-384697 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.26s)
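Both invocations above use `--dry-run`, which validates flags against the existing profile without touching the VM; the 250MB request trips the 1800MB minimum and exits 23 (`RSRC_INSUFFICIENT_REQ_MEMORY`). A hand-run sketch, assuming the same profile and binary (needs a live minikube/kvm2 setup):

```shell
# Undersized memory request: rejected during validation, exit status 23.
out/minikube-linux-amd64 start -p functional-384697 --dry-run --memory 250MB --driver=kvm2 \
  || echo "dry-run rejected: exit $?"

# Same dry-run without the bad flag validates cleanly against the existing profile.
out/minikube-linux-amd64 start -p functional-384697 --dry-run --driver=kvm2
```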

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-384697 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-384697 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (126.50829ms)

                                                
                                                
-- stdout --
	* [functional-384697] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3871/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3871/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:42:10.689449   21091 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:42:10.689534   21091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:42:10.689541   21091 out.go:358] Setting ErrFile to fd 2...
	I0916 10:42:10.689546   21091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:42:10.689844   21091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3871/.minikube/bin
	I0916 10:42:10.690379   21091 out.go:352] Setting JSON to false
	I0916 10:42:10.691544   21091 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1480,"bootTime":1726481851,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:42:10.691690   21091 start.go:139] virtualization: kvm guest
	I0916 10:42:10.693989   21091 out.go:177] * [functional-384697] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0916 10:42:10.695191   21091 notify.go:220] Checking for updates...
	I0916 10:42:10.695209   21091 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:42:10.696425   21091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:42:10.697493   21091 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3871/kubeconfig
	I0916 10:42:10.698548   21091 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3871/.minikube
	I0916 10:42:10.699648   21091 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:42:10.700845   21091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:42:10.702395   21091 config.go:182] Loaded profile config "functional-384697": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:42:10.702786   21091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:42:10.702831   21091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:42:10.718080   21091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33453
	I0916 10:42:10.718489   21091 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:42:10.719012   21091 main.go:141] libmachine: Using API Version  1
	I0916 10:42:10.719037   21091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:42:10.719366   21091 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:42:10.719532   21091 main.go:141] libmachine: (functional-384697) Calling .DriverName
	I0916 10:42:10.719749   21091 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:42:10.720087   21091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:42:10.720122   21091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:42:10.734319   21091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35283
	I0916 10:42:10.734708   21091 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:42:10.735149   21091 main.go:141] libmachine: Using API Version  1
	I0916 10:42:10.735171   21091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:42:10.735515   21091 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:42:10.735670   21091 main.go:141] libmachine: (functional-384697) Calling .DriverName
	I0916 10:42:10.766899   21091 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0916 10:42:10.768069   21091 start.go:297] selected driver: kvm2
	I0916 10:42:10.768088   21091 start.go:901] validating driver "kvm2" against &{Name:functional-384697 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-384697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.232 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:42:10.768234   21091 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:42:10.770185   21091 out.go:201] 
	W0916 10:42:10.771464   21091 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0916 10:42:10.772598   21091 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.76s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.76s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (27.53s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-384697 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-384697 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-txjsc" [dce0a80f-428e-4541-bd1e-3459084a228b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-txjsc" [dce0a80f-428e-4541-bd1e-3459084a228b] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 27.009006712s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.232:32752
functional_test.go:1675: http://192.168.39.232:32752: success! body:

Hostname: hello-node-connect-67bdd5bbb4-txjsc

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.232:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.232:32752
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (27.53s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (46.61s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2fee0a50-810a-4dc7-ba71-9cfb93220929] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004206766s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-384697 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-384697 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-384697 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-384697 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-384697 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-384697 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [13eaac98-7362-4363-bdf3-3367d6b5fe47] Pending
helpers_test.go:344: "sp-pod" [13eaac98-7362-4363-bdf3-3367d6b5fe47] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [13eaac98-7362-4363-bdf3-3367d6b5fe47] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.003765023s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-384697 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-384697 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-384697 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [38e967da-3cbb-4ec0-9680-c14e5a0a9829] Pending
helpers_test.go:344: "sp-pod" [38e967da-3cbb-4ec0-9680-c14e5a0a9829] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [38e967da-3cbb-4ec0-9680-c14e5a0a9829] Running
E0916 10:42:25.235724   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:42:27.797729   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004541991s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-384697 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.61s)

TestFunctional/parallel/SSHCmd (0.37s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.37s)

TestFunctional/parallel/CpCmd (1.13s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh -n functional-384697 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 cp functional-384697:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2303292857/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh -n functional-384697 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh -n functional-384697 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.13s)

TestFunctional/parallel/MySQL (27.95s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-384697 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-8lpcv" [c869c5cb-5960-46ee-8523-eca54c84952d] Pending
helpers_test.go:344: "mysql-6cdb49bbb-8lpcv" [c869c5cb-5960-46ee-8523-eca54c84952d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-8lpcv" [c869c5cb-5960-46ee-8523-eca54c84952d] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.003436659s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-384697 exec mysql-6cdb49bbb-8lpcv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-384697 exec mysql-6cdb49bbb-8lpcv -- mysql -ppassword -e "show databases;": exit status 1 (186.740143ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-384697 exec mysql-6cdb49bbb-8lpcv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-384697 exec mysql-6cdb49bbb-8lpcv -- mysql -ppassword -e "show databases;": exit status 1 (363.553659ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-384697 exec mysql-6cdb49bbb-8lpcv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-384697 exec mysql-6cdb49bbb-8lpcv -- mysql -ppassword -e "show databases;": exit status 1 (149.781457ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-384697 exec mysql-6cdb49bbb-8lpcv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.95s)

TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/12041/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh "sudo cat /etc/test/nested/copy/12041/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.21s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/12041.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh "sudo cat /etc/ssl/certs/12041.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/12041.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh "sudo cat /usr/share/ca-certificates/12041.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/120412.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh "sudo cat /etc/ssl/certs/120412.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/120412.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh "sudo cat /usr/share/ca-certificates/120412.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.21s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-384697 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384697 ssh "sudo systemctl is-active crio": exit status 1 (236.825104ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)

TestFunctional/parallel/License (0.16s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

TestFunctional/parallel/ServiceCmd/DeployApp (24.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-384697 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-384697 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-nvqcg" [72421231-cdf4-4e0f-9fac-2de069279a4d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-nvqcg" [72421231-cdf4-4e0f-9fac-2de069279a4d] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 24.003648149s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (24.15s)

TestFunctional/parallel/ServiceCmd/List (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.41s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 service list -o json
functional_test.go:1494: Took "405.777888ms" to run "out/minikube-linux-amd64 -p functional-384697 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.232:32586
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

TestFunctional/parallel/ServiceCmd/Format (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

TestFunctional/parallel/ServiceCmd/URL (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.232:32586
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "262.172741ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "48.548779ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/MountCmd/any-port (6.44s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-384697 /tmp/TestFunctionalparallelMountCmdany-port2336258092/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726483330200413379" to /tmp/TestFunctionalparallelMountCmdany-port2336258092/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726483330200413379" to /tmp/TestFunctionalparallelMountCmdany-port2336258092/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726483330200413379" to /tmp/TestFunctionalparallelMountCmdany-port2336258092/001/test-1726483330200413379
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384697 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (204.239164ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 16 10:42 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 16 10:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 16 10:42 test-1726483330200413379
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh cat /mount-9p/test-1726483330200413379
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-384697 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e57a406d-7109-4eb9-b2ef-079e266d275e] Pending
helpers_test.go:344: "busybox-mount" [e57a406d-7109-4eb9-b2ef-079e266d275e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e57a406d-7109-4eb9-b2ef-079e266d275e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e57a406d-7109-4eb9-b2ef-079e266d275e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003356091s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-384697 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-384697 /tmp/TestFunctionalparallelMountCmdany-port2336258092/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "228.694466ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "48.60583ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

TestFunctional/parallel/DockerEnv/bash (0.81s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-384697 docker-env) && out/minikube-linux-amd64 status -p functional-384697"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-384697 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.81s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.7s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.70s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-384697 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-384697
docker.io/kicbase/echo-server:functional-384697
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-384697 image ls --format short --alsologtostderr:
I0916 10:42:20.540988   22538 out.go:345] Setting OutFile to fd 1 ...
I0916 10:42:20.541094   22538 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:20.541103   22538 out.go:358] Setting ErrFile to fd 2...
I0916 10:42:20.541108   22538 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:20.541292   22538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3871/.minikube/bin
I0916 10:42:20.541897   22538 config.go:182] Loaded profile config "functional-384697": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:42:20.541996   22538 config.go:182] Loaded profile config "functional-384697": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:42:20.542352   22538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0916 10:42:20.542392   22538 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:42:20.561381   22538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42269
I0916 10:42:20.561887   22538 main.go:141] libmachine: () Calling .GetVersion
I0916 10:42:20.562433   22538 main.go:141] libmachine: Using API Version  1
I0916 10:42:20.562459   22538 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:42:20.562796   22538 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:42:20.563004   22538 main.go:141] libmachine: (functional-384697) Calling .GetState
I0916 10:42:20.564890   22538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0916 10:42:20.564932   22538 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:42:20.581456   22538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44301
I0916 10:42:20.581978   22538 main.go:141] libmachine: () Calling .GetVersion
I0916 10:42:20.582640   22538 main.go:141] libmachine: Using API Version  1
I0916 10:42:20.582663   22538 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:42:20.583024   22538 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:42:20.583218   22538 main.go:141] libmachine: (functional-384697) Calling .DriverName
I0916 10:42:20.583452   22538 ssh_runner.go:195] Run: systemctl --version
I0916 10:42:20.583485   22538 main.go:141] libmachine: (functional-384697) Calling .GetSSHHostname
I0916 10:42:20.586455   22538 main.go:141] libmachine: (functional-384697) DBG | domain functional-384697 has defined MAC address 52:54:00:17:74:c1 in network mk-functional-384697
I0916 10:42:20.586940   22538 main.go:141] libmachine: (functional-384697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:74:c1", ip: ""} in network mk-functional-384697: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:47 +0000 UTC Type:0 Mac:52:54:00:17:74:c1 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:functional-384697 Clientid:01:52:54:00:17:74:c1}
I0916 10:42:20.587081   22538 main.go:141] libmachine: (functional-384697) DBG | domain functional-384697 has defined IP address 192.168.39.232 and MAC address 52:54:00:17:74:c1 in network mk-functional-384697
I0916 10:42:20.587478   22538 main.go:141] libmachine: (functional-384697) Calling .GetSSHPort
I0916 10:42:20.587652   22538 main.go:141] libmachine: (functional-384697) Calling .GetSSHKeyPath
I0916 10:42:20.587826   22538 main.go:141] libmachine: (functional-384697) Calling .GetSSHUsername
I0916 10:42:20.587970   22538 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/functional-384697/id_rsa Username:docker}
I0916 10:42:20.721467   22538 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0916 10:42:20.758335   22538 main.go:141] libmachine: Making call to close driver server
I0916 10:42:20.758347   22538 main.go:141] libmachine: (functional-384697) Calling .Close
I0916 10:42:20.758600   22538 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:42:20.758630   22538 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 10:42:20.758639   22538 main.go:141] libmachine: Making call to close driver server
I0916 10:42:20.758606   22538 main.go:141] libmachine: (functional-384697) DBG | Closing plugin on server side
I0916 10:42:20.758648   22538 main.go:141] libmachine: (functional-384697) Calling .Close
I0916 10:42:20.758885   22538 main.go:141] libmachine: (functional-384697) DBG | Closing plugin on server side
I0916 10:42:20.758936   22538 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:42:20.758954   22538 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-384697 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kicbase/echo-server               | functional-384697 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-384697 | f2057a6de0cb1 | 30B    |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-384697 image ls --format table --alsologtostderr:
I0916 10:42:21.362933   22662 out.go:345] Setting OutFile to fd 1 ...
I0916 10:42:21.363056   22662 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:21.363065   22662 out.go:358] Setting ErrFile to fd 2...
I0916 10:42:21.363070   22662 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:21.363263   22662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3871/.minikube/bin
I0916 10:42:21.363834   22662 config.go:182] Loaded profile config "functional-384697": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:42:21.363926   22662 config.go:182] Loaded profile config "functional-384697": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:42:21.364291   22662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0916 10:42:21.364324   22662 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:42:21.380189   22662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37909
I0916 10:42:21.380722   22662 main.go:141] libmachine: () Calling .GetVersion
I0916 10:42:21.381346   22662 main.go:141] libmachine: Using API Version  1
I0916 10:42:21.381370   22662 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:42:21.381722   22662 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:42:21.381881   22662 main.go:141] libmachine: (functional-384697) Calling .GetState
I0916 10:42:21.383816   22662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0916 10:42:21.383860   22662 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:42:21.398348   22662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34355
I0916 10:42:21.398800   22662 main.go:141] libmachine: () Calling .GetVersion
I0916 10:42:21.399263   22662 main.go:141] libmachine: Using API Version  1
I0916 10:42:21.399285   22662 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:42:21.399612   22662 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:42:21.399767   22662 main.go:141] libmachine: (functional-384697) Calling .DriverName
I0916 10:42:21.399956   22662 ssh_runner.go:195] Run: systemctl --version
I0916 10:42:21.399983   22662 main.go:141] libmachine: (functional-384697) Calling .GetSSHHostname
I0916 10:42:21.402591   22662 main.go:141] libmachine: (functional-384697) DBG | domain functional-384697 has defined MAC address 52:54:00:17:74:c1 in network mk-functional-384697
I0916 10:42:21.403030   22662 main.go:141] libmachine: (functional-384697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:74:c1", ip: ""} in network mk-functional-384697: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:47 +0000 UTC Type:0 Mac:52:54:00:17:74:c1 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:functional-384697 Clientid:01:52:54:00:17:74:c1}
I0916 10:42:21.403051   22662 main.go:141] libmachine: (functional-384697) DBG | domain functional-384697 has defined IP address 192.168.39.232 and MAC address 52:54:00:17:74:c1 in network mk-functional-384697
I0916 10:42:21.403252   22662 main.go:141] libmachine: (functional-384697) Calling .GetSSHPort
I0916 10:42:21.403426   22662 main.go:141] libmachine: (functional-384697) Calling .GetSSHKeyPath
I0916 10:42:21.403549   22662 main.go:141] libmachine: (functional-384697) Calling .GetSSHUsername
I0916 10:42:21.403674   22662 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/functional-384697/id_rsa Username:docker}
I0916 10:42:21.525630   22662 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0916 10:42:21.579638   22662 main.go:141] libmachine: Making call to close driver server
I0916 10:42:21.579659   22662 main.go:141] libmachine: (functional-384697) Calling .Close
I0916 10:42:21.579911   22662 main.go:141] libmachine: (functional-384697) DBG | Closing plugin on server side
I0916 10:42:21.579933   22662 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:42:21.579946   22662 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 10:42:21.579956   22662 main.go:141] libmachine: Making call to close driver server
I0916 10:42:21.579968   22662 main.go:141] libmachine: (functional-384697) Calling .Close
I0916 10:42:21.580207   22662 main.go:141] libmachine: (functional-384697) DBG | Closing plugin on server side
I0916 10:42:21.580256   22662 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:42:21.580272   22662 main.go:141] libmachine: Making call to close connection to plugin binary
E0916 10:42:22.666901   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:42:22.673668   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:42:22.685025   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:42:22.706370   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:42:22.747731   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:42:22.829187   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:42:22.990744   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:42:23.312443   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
2024/09/16 10:42:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-384697 image ls --format json --alsologtostderr:
[{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"f2057a6de0cb14bfb73d4e3fd9f76fc1a92bfcf3d7ed969d0fa2c3f8f85b0db2","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-384697"],"size":"30"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-384697"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-384697 image ls --format json --alsologtostderr:
I0916 10:42:21.142861   22627 out.go:345] Setting OutFile to fd 1 ...
I0916 10:42:21.142968   22627 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:21.142976   22627 out.go:358] Setting ErrFile to fd 2...
I0916 10:42:21.142980   22627 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:21.143135   22627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3871/.minikube/bin
I0916 10:42:21.143743   22627 config.go:182] Loaded profile config "functional-384697": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:42:21.143836   22627 config.go:182] Loaded profile config "functional-384697": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:42:21.144187   22627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0916 10:42:21.144222   22627 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:42:21.159723   22627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33385
I0916 10:42:21.160129   22627 main.go:141] libmachine: () Calling .GetVersion
I0916 10:42:21.160638   22627 main.go:141] libmachine: Using API Version  1
I0916 10:42:21.160661   22627 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:42:21.161016   22627 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:42:21.161217   22627 main.go:141] libmachine: (functional-384697) Calling .GetState
I0916 10:42:21.162975   22627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0916 10:42:21.163016   22627 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:42:21.177606   22627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46105
I0916 10:42:21.178049   22627 main.go:141] libmachine: () Calling .GetVersion
I0916 10:42:21.178610   22627 main.go:141] libmachine: Using API Version  1
I0916 10:42:21.178639   22627 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:42:21.178950   22627 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:42:21.179144   22627 main.go:141] libmachine: (functional-384697) Calling .DriverName
I0916 10:42:21.179347   22627 ssh_runner.go:195] Run: systemctl --version
I0916 10:42:21.179375   22627 main.go:141] libmachine: (functional-384697) Calling .GetSSHHostname
I0916 10:42:21.182440   22627 main.go:141] libmachine: (functional-384697) DBG | domain functional-384697 has defined MAC address 52:54:00:17:74:c1 in network mk-functional-384697
I0916 10:42:21.182915   22627 main.go:141] libmachine: (functional-384697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:74:c1", ip: ""} in network mk-functional-384697: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:47 +0000 UTC Type:0 Mac:52:54:00:17:74:c1 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:functional-384697 Clientid:01:52:54:00:17:74:c1}
I0916 10:42:21.182941   22627 main.go:141] libmachine: (functional-384697) DBG | domain functional-384697 has defined IP address 192.168.39.232 and MAC address 52:54:00:17:74:c1 in network mk-functional-384697
I0916 10:42:21.183095   22627 main.go:141] libmachine: (functional-384697) Calling .GetSSHPort
I0916 10:42:21.183247   22627 main.go:141] libmachine: (functional-384697) Calling .GetSSHKeyPath
I0916 10:42:21.183390   22627 main.go:141] libmachine: (functional-384697) Calling .GetSSHUsername
I0916 10:42:21.183534   22627 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/functional-384697/id_rsa Username:docker}
I0916 10:42:21.278643   22627 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0916 10:42:21.314931   22627 main.go:141] libmachine: Making call to close driver server
I0916 10:42:21.314946   22627 main.go:141] libmachine: (functional-384697) Calling .Close
I0916 10:42:21.315247   22627 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:42:21.315268   22627 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 10:42:21.315271   22627 main.go:141] libmachine: (functional-384697) DBG | Closing plugin on server side
I0916 10:42:21.315276   22627 main.go:141] libmachine: Making call to close driver server
I0916 10:42:21.315285   22627 main.go:141] libmachine: (functional-384697) Calling .Close
I0916 10:42:21.315493   22627 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:42:21.315509   22627 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 10:42:21.315527   22627 main.go:141] libmachine: (functional-384697) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-384697 image ls --format yaml --alsologtostderr:
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-384697
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: f2057a6de0cb14bfb73d4e3fd9f76fc1a92bfcf3d7ed969d0fa2c3f8f85b0db2
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-384697
size: "30"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-384697 image ls --format yaml --alsologtostderr:
I0916 10:42:20.840745   22561 out.go:345] Setting OutFile to fd 1 ...
I0916 10:42:20.840844   22561 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:20.840854   22561 out.go:358] Setting ErrFile to fd 2...
I0916 10:42:20.840859   22561 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:20.841036   22561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3871/.minikube/bin
I0916 10:42:20.841628   22561 config.go:182] Loaded profile config "functional-384697": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:42:20.841740   22561 config.go:182] Loaded profile config "functional-384697": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:42:20.842118   22561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0916 10:42:20.842163   22561 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:42:20.859676   22561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37523
I0916 10:42:20.860349   22561 main.go:141] libmachine: () Calling .GetVersion
I0916 10:42:20.860916   22561 main.go:141] libmachine: Using API Version  1
I0916 10:42:20.860931   22561 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:42:20.861305   22561 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:42:20.861473   22561 main.go:141] libmachine: (functional-384697) Calling .GetState
I0916 10:42:20.863980   22561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0916 10:42:20.864018   22561 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:42:20.878949   22561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38871
I0916 10:42:20.879374   22561 main.go:141] libmachine: () Calling .GetVersion
I0916 10:42:20.879857   22561 main.go:141] libmachine: Using API Version  1
I0916 10:42:20.879880   22561 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:42:20.880331   22561 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:42:20.880463   22561 main.go:141] libmachine: (functional-384697) Calling .DriverName
I0916 10:42:20.880655   22561 ssh_runner.go:195] Run: systemctl --version
I0916 10:42:20.880683   22561 main.go:141] libmachine: (functional-384697) Calling .GetSSHHostname
I0916 10:42:20.883716   22561 main.go:141] libmachine: (functional-384697) DBG | domain functional-384697 has defined MAC address 52:54:00:17:74:c1 in network mk-functional-384697
I0916 10:42:20.884154   22561 main.go:141] libmachine: (functional-384697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:74:c1", ip: ""} in network mk-functional-384697: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:47 +0000 UTC Type:0 Mac:52:54:00:17:74:c1 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:functional-384697 Clientid:01:52:54:00:17:74:c1}
I0916 10:42:20.884194   22561 main.go:141] libmachine: (functional-384697) DBG | domain functional-384697 has defined IP address 192.168.39.232 and MAC address 52:54:00:17:74:c1 in network mk-functional-384697
I0916 10:42:20.884278   22561 main.go:141] libmachine: (functional-384697) Calling .GetSSHPort
I0916 10:42:20.884468   22561 main.go:141] libmachine: (functional-384697) Calling .GetSSHKeyPath
I0916 10:42:20.884591   22561 main.go:141] libmachine: (functional-384697) Calling .GetSSHUsername
I0916 10:42:20.884714   22561 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/functional-384697/id_rsa Username:docker}
I0916 10:42:20.981567   22561 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0916 10:42:21.001002   22561 main.go:141] libmachine: Making call to close driver server
I0916 10:42:21.001017   22561 main.go:141] libmachine: (functional-384697) Calling .Close
I0916 10:42:21.001273   22561 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:42:21.001299   22561 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 10:42:21.001301   22561 main.go:141] libmachine: (functional-384697) DBG | Closing plugin on server side
I0916 10:42:21.001312   22561 main.go:141] libmachine: Making call to close driver server
I0916 10:42:21.001331   22561 main.go:141] libmachine: (functional-384697) Calling .Close
I0916 10:42:21.001534   22561 main.go:141] libmachine: (functional-384697) DBG | Closing plugin on server side
I0916 10:42:21.001534   22561 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:42:21.001561   22561 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384697 ssh pgrep buildkitd: exit status 1 (215.089357ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
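The exit status 1 above is the expected signal: `pgrep` exits non-zero when no process matches, which is how the test confirms buildkitd is not running before falling back to `docker build`. A minimal local sketch of that pattern (the daemon name below is a made-up placeholder assumed not to be running):

```shell
# pgrep exits 0 if a matching process exists, non-zero otherwise.
# "nonexistent-daemon-xyz" is a hypothetical name, assumed not running.
if pgrep nonexistent-daemon-xyz >/dev/null 2>&1; then
  echo "running"
else
  echo "not running"
fi
```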
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 image build -t localhost/my-image:functional-384697 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-384697 image build -t localhost/my-image:functional-384697 testdata/build --alsologtostderr: (2.662610738s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-384697 image build -t localhost/my-image:functional-384697 testdata/build --alsologtostderr:
I0916 10:42:21.052193   22614 out.go:345] Setting OutFile to fd 1 ...
I0916 10:42:21.052344   22614 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:21.052355   22614 out.go:358] Setting ErrFile to fd 2...
I0916 10:42:21.052359   22614 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:21.052515   22614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3871/.minikube/bin
I0916 10:42:21.086904   22614 config.go:182] Loaded profile config "functional-384697": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:42:21.091727   22614 config.go:182] Loaded profile config "functional-384697": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:42:21.092152   22614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0916 10:42:21.092196   22614 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:42:21.110742   22614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
I0916 10:42:21.111220   22614 main.go:141] libmachine: () Calling .GetVersion
I0916 10:42:21.111830   22614 main.go:141] libmachine: Using API Version  1
I0916 10:42:21.111861   22614 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:42:21.112253   22614 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:42:21.112445   22614 main.go:141] libmachine: (functional-384697) Calling .GetState
I0916 10:42:21.114589   22614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0916 10:42:21.114631   22614 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 10:42:21.132051   22614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41359
I0916 10:42:21.132418   22614 main.go:141] libmachine: () Calling .GetVersion
I0916 10:42:21.132839   22614 main.go:141] libmachine: Using API Version  1
I0916 10:42:21.132861   22614 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 10:42:21.133141   22614 main.go:141] libmachine: () Calling .GetMachineName
I0916 10:42:21.133374   22614 main.go:141] libmachine: (functional-384697) Calling .DriverName
I0916 10:42:21.133578   22614 ssh_runner.go:195] Run: systemctl --version
I0916 10:42:21.133616   22614 main.go:141] libmachine: (functional-384697) Calling .GetSSHHostname
I0916 10:42:21.136753   22614 main.go:141] libmachine: (functional-384697) DBG | domain functional-384697 has defined MAC address 52:54:00:17:74:c1 in network mk-functional-384697
I0916 10:42:21.137263   22614 main.go:141] libmachine: (functional-384697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:74:c1", ip: ""} in network mk-functional-384697: {Iface:virbr1 ExpiryTime:2024-09-16 11:38:47 +0000 UTC Type:0 Mac:52:54:00:17:74:c1 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:functional-384697 Clientid:01:52:54:00:17:74:c1}
I0916 10:42:21.137327   22614 main.go:141] libmachine: (functional-384697) DBG | domain functional-384697 has defined IP address 192.168.39.232 and MAC address 52:54:00:17:74:c1 in network mk-functional-384697
I0916 10:42:21.137398   22614 main.go:141] libmachine: (functional-384697) Calling .GetSSHPort
I0916 10:42:21.137552   22614 main.go:141] libmachine: (functional-384697) Calling .GetSSHKeyPath
I0916 10:42:21.137707   22614 main.go:141] libmachine: (functional-384697) Calling .GetSSHUsername
I0916 10:42:21.137864   22614 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/functional-384697/id_rsa Username:docker}
I0916 10:42:21.221185   22614 build_images.go:161] Building image from path: /tmp/build.99088082.tar
I0916 10:42:21.221249   22614 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0916 10:42:21.233988   22614 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.99088082.tar
I0916 10:42:21.242006   22614 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.99088082.tar: stat -c "%s %y" /var/lib/minikube/build/build.99088082.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.99088082.tar': No such file or directory
I0916 10:42:21.242042   22614 ssh_runner.go:362] scp /tmp/build.99088082.tar --> /var/lib/minikube/build/build.99088082.tar (3072 bytes)
I0916 10:42:21.272281   22614 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.99088082
I0916 10:42:21.289158   22614 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.99088082 -xf /var/lib/minikube/build/build.99088082.tar
I0916 10:42:21.315442   22614 docker.go:360] Building image: /var/lib/minikube/build/build.99088082
I0916 10:42:21.315508   22614 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-384697 /var/lib/minikube/build/build.99088082
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:aba66342577c5d4deeeaf115513d36c0cc337d6fe00fe49d1e24375de8dc17de done
#8 naming to localhost/my-image:functional-384697 done
#8 DONE 0.1s
I0916 10:42:23.643626   22614 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-384697 /var/lib/minikube/build/build.99088082: (2.328097944s)
I0916 10:42:23.643697   22614 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.99088082
I0916 10:42:23.656376   22614 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.99088082.tar
I0916 10:42:23.671110   22614 build_images.go:217] Built localhost/my-image:functional-384697 from /tmp/build.99088082.tar
I0916 10:42:23.671135   22614 build_images.go:133] succeeded building to: functional-384697
I0916 10:42:23.671140   22614 build_images.go:134] failed building to: 
I0916 10:42:23.671184   22614 main.go:141] libmachine: Making call to close driver server
I0916 10:42:23.671198   22614 main.go:141] libmachine: (functional-384697) Calling .Close
I0916 10:42:23.671450   22614 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:42:23.671466   22614 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 10:42:23.671468   22614 main.go:141] libmachine: (functional-384697) DBG | Closing plugin on server side
I0916 10:42:23.671472   22614 main.go:141] libmachine: Making call to close driver server
I0916 10:42:23.671481   22614 main.go:141] libmachine: (functional-384697) Calling .Close
I0916 10:42:23.671711   22614 main.go:141] libmachine: (functional-384697) DBG | Closing plugin on server side
I0916 10:42:23.671717   22614 main.go:141] libmachine: Successfully made call to close driver server
I0916 10:42:23.671728   22614 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 image ls
E0916 10:42:23.954348   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.07s)
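The build log above shows the existence check minikube performs before copying the build context: `stat -c "%s %y"` on the remote tar, with a non-zero exit treated as "file absent", which triggers the scp. The same pattern can be sketched locally (the scratch path below is a hypothetical stand-in for the remote build tar):

```shell
# Existence check via stat: a non-zero exit means the file is absent,
# so the caller falls back to copying it into place.
f="/tmp/build.sketch.$$"   # hypothetical scratch file, not the real build tar
if ! stat -c "%s %y" "$f" >/dev/null 2>&1; then
  echo "absent, would scp the tar now"
fi
: > "$f"                   # simulate the copy
stat -c "%s %y" "$f" >/dev/null 2>&1 && echo "present"
rm -f "$f"
```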

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.5s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.481969099s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-384697
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.50s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 image load --daemon kicbase/echo-server:functional-384697 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.04s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 image load --daemon kicbase/echo-server:functional-384697 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-384697
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 image load --daemon kicbase/echo-server:functional-384697 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.59s)

TestFunctional/parallel/MountCmd/specific-port (1.93s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-384697 /tmp/TestFunctionalparallelMountCmdspecific-port1223772047/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384697 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (220.253018ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-384697 /tmp/TestFunctionalparallelMountCmdspecific-port1223772047/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384697 ssh "sudo umount -f /mount-9p": exit status 1 (192.50011ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-384697 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-384697 /tmp/TestFunctionalparallelMountCmdspecific-port1223772047/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.93s)
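The mount verification above relies on `findmnt -T`, which resolves whichever mount backs a given path, so piping it through `grep 9p` confirms the 9p mount landed (the first probe races the mount daemon and is retried). A local sketch against `/`, since any path on a mounted filesystem works:

```shell
# findmnt -T walks up from the path to the mount point that backs it;
# -n -o FSTYPE prints just the filesystem type.
fstype=$(findmnt -n -o FSTYPE -T /)
echo "/ is backed by: $fstype"

# The test's check: grep for the expected type in the findmnt output.
if findmnt -T / | grep -q 9p; then
  echo "9p mount"
else
  echo "not a 9p mount"
fi
```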

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 image save kicbase/echo-server:functional-384697 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 image rm kicbase/echo-server:functional-384697 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.40s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-384697
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 image save --daemon kicbase/echo-server:functional-384697 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-384697
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.48s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-384697 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4096992515/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-384697 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4096992515/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-384697 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4096992515/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384697 ssh "findmnt -T" /mount1: exit status 1 (304.831025ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-384697 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-384697 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-384697 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4096992515/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-384697 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4096992515/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-384697 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4096992515/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.48s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-384697
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-384697
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-384697
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (182.7s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-413772 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-413772 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (55.987905705s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-413772 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-413772 cache add gcr.io/k8s-minikube/gvisor-addon:2: (23.140151143s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-413772 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-413772 addons enable gvisor: (4.112530875s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [37184900-234f-4a83-aedd-dc34f7615e15] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004224819s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-413772 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [45044834-a6bb-4944-910f-4e30e661a05f] Pending
helpers_test.go:344: "nginx-gvisor" [45044834-a6bb-4944-910f-4e30e661a05f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [45044834-a6bb-4944-910f-4e30e661a05f] Running
E0916 11:28:51.989952   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/skaffold-413609/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 14.0040027s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-413772
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-413772: (6.569832419s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-413772 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-413772 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m0.52033901s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [37184900-234f-4a83-aedd-dc34f7615e15] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:344: "gvisor" [37184900-234f-4a83-aedd-dc34f7615e15] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004631649s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [45044834-a6bb-4944-910f-4e30e661a05f] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.0048024s
helpers_test.go:175: Cleaning up "gvisor-413772" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-413772
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-413772: (1.171573784s)
--- PASS: TestGvisorAddon (182.70s)

TestMultiControlPlane/serial/StartCluster (210.78s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-263310 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0916 10:42:32.919806   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:42:43.161849   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:43:03.643401   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:43:44.605378   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:45:06.527422   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-263310 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m30.160101259s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (210.78s)

TestMultiControlPlane/serial/DeployApp (5.14s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-263310 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-263310 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-263310 -- rollout status deployment/busybox: (3.047528947s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-263310 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-263310 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-263310 -- exec busybox-7dff88458-d599l -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-263310 -- exec busybox-7dff88458-p2wmm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-263310 -- exec busybox-7dff88458-sp847 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-263310 -- exec busybox-7dff88458-d599l -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-263310 -- exec busybox-7dff88458-p2wmm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-263310 -- exec busybox-7dff88458-sp847 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-263310 -- exec busybox-7dff88458-d599l -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-263310 -- exec busybox-7dff88458-p2wmm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-263310 -- exec busybox-7dff88458-sp847 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.14s)
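For reference, the DNS checks above reduce to one loop: get the pod names via jsonpath, then resolve three names from inside each pod. This is a hand-written sketch, not part of the test suite; the pod names are copied from this log, and the `kubectl exec` invocation is left as a comment because it assumes a live ha-263310 cluster.

```shell
# Pod names as reported above; in the real run they come from
#   kubectl ... get pods -o jsonpath='{.items[*].metadata.name}'
pods='busybox-7dff88458-d599l busybox-7dff88458-p2wmm busybox-7dff88458-sp847'

# jsonpath output is space-separated; word splitting turns it into a list.
set -- $pods
pod_count=$#

# Each pod resolves three names, mirroring ha_test.go:171/181/189.
for pod in "$@"; do
  for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
    # Real test: out/minikube-linux-amd64 kubectl -p ha-263310 -- exec "$pod" -- nslookup "$name"
    echo "would run: nslookup $name in $pod"
  done
done
```

With three pods and three names this prints nine lines, matching the nine `ha_test.go` exec steps logged above.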

TestMultiControlPlane/serial/PingHostFromPods (1.14s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-263310 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-263310 -- exec busybox-7dff88458-d599l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-263310 -- exec busybox-7dff88458-d599l -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-263310 -- exec busybox-7dff88458-p2wmm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-263310 -- exec busybox-7dff88458-p2wmm -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-263310 -- exec busybox-7dff88458-sp847 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-263310 -- exec busybox-7dff88458-sp847 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.14s)
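The `awk 'NR==5' | cut -d' ' -f3` pipeline above is easiest to read against concrete output. The sample below is a hand-made mock of busybox nslookup output (the 10.96.0.10 server address is an assumed typical cluster-DNS IP, not taken from this log): line 5 is the `Address 1:` line, and field 3 of that line is the host IP that the subsequent `ping -c 1` targets.

```shell
# Mock busybox nslookup output (shape assumed; only 192.168.39.1 appears in this log).
sample='Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'

# awk 'NR==5' keeps only line 5 ("Address 1: <ip> <hostname>");
# cut -d' ' -f3 splits on single spaces and extracts field 3, the IP.
host_ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

Note the pipeline depends on the busybox nslookup layout: with single spaces on line 5, `cut` sees `Address` / `1:` / `192.168.39.1`, so field 3 is the IP.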

TestMultiControlPlane/serial/AddWorkerNode (62.19s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-263310 -v=7 --alsologtostderr
E0916 10:46:42.479124   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:46:42.485583   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:46:42.496975   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:46:42.518363   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:46:42.560125   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:46:42.642130   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:46:42.804451   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:46:43.126168   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:46:43.767933   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:46:45.049878   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:46:47.611267   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:46:52.732884   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:47:02.974246   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-263310 -v=7 --alsologtostderr: (1m1.383094133s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (62.19s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-263310 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.5s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.50s)

TestMultiControlPlane/serial/CopyFile (12.09s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 cp testdata/cp-test.txt ha-263310:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 cp ha-263310:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3276067533/001/cp-test_ha-263310.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 cp ha-263310:/home/docker/cp-test.txt ha-263310-m02:/home/docker/cp-test_ha-263310_ha-263310-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m02 "sudo cat /home/docker/cp-test_ha-263310_ha-263310-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 cp ha-263310:/home/docker/cp-test.txt ha-263310-m03:/home/docker/cp-test_ha-263310_ha-263310-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m03 "sudo cat /home/docker/cp-test_ha-263310_ha-263310-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 cp ha-263310:/home/docker/cp-test.txt ha-263310-m04:/home/docker/cp-test_ha-263310_ha-263310-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m04 "sudo cat /home/docker/cp-test_ha-263310_ha-263310-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 cp testdata/cp-test.txt ha-263310-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 cp ha-263310-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3276067533/001/cp-test_ha-263310-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 cp ha-263310-m02:/home/docker/cp-test.txt ha-263310:/home/docker/cp-test_ha-263310-m02_ha-263310.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310 "sudo cat /home/docker/cp-test_ha-263310-m02_ha-263310.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 cp ha-263310-m02:/home/docker/cp-test.txt ha-263310-m03:/home/docker/cp-test_ha-263310-m02_ha-263310-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m03 "sudo cat /home/docker/cp-test_ha-263310-m02_ha-263310-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 cp ha-263310-m02:/home/docker/cp-test.txt ha-263310-m04:/home/docker/cp-test_ha-263310-m02_ha-263310-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m04 "sudo cat /home/docker/cp-test_ha-263310-m02_ha-263310-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 cp testdata/cp-test.txt ha-263310-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 cp ha-263310-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3276067533/001/cp-test_ha-263310-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 cp ha-263310-m03:/home/docker/cp-test.txt ha-263310:/home/docker/cp-test_ha-263310-m03_ha-263310.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310 "sudo cat /home/docker/cp-test_ha-263310-m03_ha-263310.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 cp ha-263310-m03:/home/docker/cp-test.txt ha-263310-m02:/home/docker/cp-test_ha-263310-m03_ha-263310-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m02 "sudo cat /home/docker/cp-test_ha-263310-m03_ha-263310-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 cp ha-263310-m03:/home/docker/cp-test.txt ha-263310-m04:/home/docker/cp-test_ha-263310-m03_ha-263310-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m04 "sudo cat /home/docker/cp-test_ha-263310-m03_ha-263310-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 cp testdata/cp-test.txt ha-263310-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 cp ha-263310-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3276067533/001/cp-test_ha-263310-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 cp ha-263310-m04:/home/docker/cp-test.txt ha-263310:/home/docker/cp-test_ha-263310-m04_ha-263310.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310 "sudo cat /home/docker/cp-test_ha-263310-m04_ha-263310.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 cp ha-263310-m04:/home/docker/cp-test.txt ha-263310-m02:/home/docker/cp-test_ha-263310-m04_ha-263310-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m02 "sudo cat /home/docker/cp-test_ha-263310-m04_ha-263310-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 cp ha-263310-m04:/home/docker/cp-test.txt ha-263310-m03:/home/docker/cp-test_ha-263310-m04_ha-263310-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 ssh -n ha-263310-m03 "sudo cat /home/docker/cp-test_ha-263310-m04_ha-263310-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.09s)
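Each `cp` / `ssh -n ... "sudo cat ..."` pair above is a copy-then-verify round trip: push a file to a node, read it back, and compare contents. The sketch below mimics that pattern locally, since `minikube cp` and `minikube ssh` need a live cluster; plain `cp`/`cat` stand in, and the file text is invented, not the real testdata/cp-test.txt contents.

```shell
# Local stand-in for the cp/ssh-cat round trips in the CopyFile test.
src=$(mktemp)
dst=$(mktemp)
printf 'Test file for minikube cp\n' > "$src"

cp "$src" "$dst"          # stands in for: minikube -p ha-263310 cp <src> <node>:<path>
readback=$(cat "$dst")    # stands in for: minikube -p ha-263310 ssh -n <node> "sudo cat <path>"

# The test passes when the read-back contents match the source file.
[ "$readback" = "$(cat "$src")" ] && echo "contents match"
rm -f "$src" "$dst"
```

The real test repeats this for every source/destination node pair (ha-263310, -m02, -m03, -m04), which is why the section logs so many near-identical cp/cat lines.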

TestMultiControlPlane/serial/StopSecondaryNode (13.07s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 node stop m02 -v=7 --alsologtostderr
E0916 10:47:22.667466   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:47:23.456553   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-263310 node stop m02 -v=7 --alsologtostderr: (12.468907889s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-263310 status -v=7 --alsologtostderr: exit status 7 (594.889245ms)
-- stdout --
	ha-263310
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-263310-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-263310-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-263310-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0916 10:47:35.042024   27017 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:47:35.042149   27017 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:47:35.042161   27017 out.go:358] Setting ErrFile to fd 2...
	I0916 10:47:35.042167   27017 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:47:35.042446   27017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3871/.minikube/bin
	I0916 10:47:35.042612   27017 out.go:352] Setting JSON to false
	I0916 10:47:35.042643   27017 mustload.go:65] Loading cluster: ha-263310
	I0916 10:47:35.042706   27017 notify.go:220] Checking for updates...
	I0916 10:47:35.043032   27017 config.go:182] Loaded profile config "ha-263310": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:47:35.043046   27017 status.go:255] checking status of ha-263310 ...
	I0916 10:47:35.043461   27017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:47:35.043501   27017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:47:35.061849   27017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39169
	I0916 10:47:35.062330   27017 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:47:35.063074   27017 main.go:141] libmachine: Using API Version  1
	I0916 10:47:35.063111   27017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:47:35.063521   27017 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:47:35.063740   27017 main.go:141] libmachine: (ha-263310) Calling .GetState
	I0916 10:47:35.065799   27017 status.go:330] ha-263310 host status = "Running" (err=<nil>)
	I0916 10:47:35.065814   27017 host.go:66] Checking if "ha-263310" exists ...
	I0916 10:47:35.066102   27017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:47:35.066145   27017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:47:35.080723   27017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41105
	I0916 10:47:35.081154   27017 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:47:35.081648   27017 main.go:141] libmachine: Using API Version  1
	I0916 10:47:35.081678   27017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:47:35.081964   27017 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:47:35.082126   27017 main.go:141] libmachine: (ha-263310) Calling .GetIP
	I0916 10:47:35.084647   27017 main.go:141] libmachine: (ha-263310) DBG | domain ha-263310 has defined MAC address 52:54:00:99:57:00 in network mk-ha-263310
	I0916 10:47:35.085127   27017 main.go:141] libmachine: (ha-263310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:57:00", ip: ""} in network mk-ha-263310: {Iface:virbr1 ExpiryTime:2024-09-16 11:42:44 +0000 UTC Type:0 Mac:52:54:00:99:57:00 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-263310 Clientid:01:52:54:00:99:57:00}
	I0916 10:47:35.085145   27017 main.go:141] libmachine: (ha-263310) DBG | domain ha-263310 has defined IP address 192.168.39.142 and MAC address 52:54:00:99:57:00 in network mk-ha-263310
	I0916 10:47:35.085282   27017 host.go:66] Checking if "ha-263310" exists ...
	I0916 10:47:35.085677   27017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:47:35.085731   27017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:47:35.099587   27017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
	I0916 10:47:35.100000   27017 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:47:35.100484   27017 main.go:141] libmachine: Using API Version  1
	I0916 10:47:35.100503   27017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:47:35.100784   27017 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:47:35.100946   27017 main.go:141] libmachine: (ha-263310) Calling .DriverName
	I0916 10:47:35.101152   27017 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:47:35.101178   27017 main.go:141] libmachine: (ha-263310) Calling .GetSSHHostname
	I0916 10:47:35.103679   27017 main.go:141] libmachine: (ha-263310) DBG | domain ha-263310 has defined MAC address 52:54:00:99:57:00 in network mk-ha-263310
	I0916 10:47:35.104153   27017 main.go:141] libmachine: (ha-263310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:57:00", ip: ""} in network mk-ha-263310: {Iface:virbr1 ExpiryTime:2024-09-16 11:42:44 +0000 UTC Type:0 Mac:52:54:00:99:57:00 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-263310 Clientid:01:52:54:00:99:57:00}
	I0916 10:47:35.104194   27017 main.go:141] libmachine: (ha-263310) DBG | domain ha-263310 has defined IP address 192.168.39.142 and MAC address 52:54:00:99:57:00 in network mk-ha-263310
	I0916 10:47:35.104338   27017 main.go:141] libmachine: (ha-263310) Calling .GetSSHPort
	I0916 10:47:35.104489   27017 main.go:141] libmachine: (ha-263310) Calling .GetSSHKeyPath
	I0916 10:47:35.104628   27017 main.go:141] libmachine: (ha-263310) Calling .GetSSHUsername
	I0916 10:47:35.104759   27017 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/ha-263310/id_rsa Username:docker}
	I0916 10:47:35.187660   27017 ssh_runner.go:195] Run: systemctl --version
	I0916 10:47:35.193577   27017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:47:35.209683   27017 kubeconfig.go:125] found "ha-263310" server: "https://192.168.39.254:8443"
	I0916 10:47:35.209717   27017 api_server.go:166] Checking apiserver status ...
	I0916 10:47:35.209756   27017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:47:35.223647   27017 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1946/cgroup
	W0916 10:47:35.234606   27017 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1946/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:47:35.234652   27017 ssh_runner.go:195] Run: ls
	I0916 10:47:35.239053   27017 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:47:35.243899   27017 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:47:35.243918   27017 status.go:422] ha-263310 apiserver status = Running (err=<nil>)
	I0916 10:47:35.243926   27017 status.go:257] ha-263310 status: &{Name:ha-263310 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:47:35.243953   27017 status.go:255] checking status of ha-263310-m02 ...
	I0916 10:47:35.244246   27017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:47:35.244284   27017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:47:35.258542   27017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43735
	I0916 10:47:35.258878   27017 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:47:35.259292   27017 main.go:141] libmachine: Using API Version  1
	I0916 10:47:35.259313   27017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:47:35.259604   27017 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:47:35.259751   27017 main.go:141] libmachine: (ha-263310-m02) Calling .GetState
	I0916 10:47:35.261322   27017 status.go:330] ha-263310-m02 host status = "Stopped" (err=<nil>)
	I0916 10:47:35.261334   27017 status.go:343] host is not running, skipping remaining checks
	I0916 10:47:35.261339   27017 status.go:257] ha-263310-m02 status: &{Name:ha-263310-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:47:35.261353   27017 status.go:255] checking status of ha-263310-m03 ...
	I0916 10:47:35.261632   27017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:47:35.261663   27017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:47:35.275929   27017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0916 10:47:35.276270   27017 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:47:35.276694   27017 main.go:141] libmachine: Using API Version  1
	I0916 10:47:35.276715   27017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:47:35.277083   27017 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:47:35.277249   27017 main.go:141] libmachine: (ha-263310-m03) Calling .GetState
	I0916 10:47:35.278839   27017 status.go:330] ha-263310-m03 host status = "Running" (err=<nil>)
	I0916 10:47:35.278852   27017 host.go:66] Checking if "ha-263310-m03" exists ...
	I0916 10:47:35.279190   27017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:47:35.279226   27017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:47:35.294183   27017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45843
	I0916 10:47:35.294508   27017 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:47:35.294965   27017 main.go:141] libmachine: Using API Version  1
	I0916 10:47:35.294982   27017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:47:35.295328   27017 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:47:35.295512   27017 main.go:141] libmachine: (ha-263310-m03) Calling .GetIP
	I0916 10:47:35.298365   27017 main.go:141] libmachine: (ha-263310-m03) DBG | domain ha-263310-m03 has defined MAC address 52:54:00:6e:2c:ab in network mk-ha-263310
	I0916 10:47:35.298782   27017 main.go:141] libmachine: (ha-263310-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:2c:ab", ip: ""} in network mk-ha-263310: {Iface:virbr1 ExpiryTime:2024-09-16 11:44:54 +0000 UTC Type:0 Mac:52:54:00:6e:2c:ab Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:ha-263310-m03 Clientid:01:52:54:00:6e:2c:ab}
	I0916 10:47:35.298815   27017 main.go:141] libmachine: (ha-263310-m03) DBG | domain ha-263310-m03 has defined IP address 192.168.39.129 and MAC address 52:54:00:6e:2c:ab in network mk-ha-263310
	I0916 10:47:35.298963   27017 host.go:66] Checking if "ha-263310-m03" exists ...
	I0916 10:47:35.299384   27017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:47:35.299426   27017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:47:35.313226   27017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35619
	I0916 10:47:35.313695   27017 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:47:35.314124   27017 main.go:141] libmachine: Using API Version  1
	I0916 10:47:35.314142   27017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:47:35.314454   27017 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:47:35.314620   27017 main.go:141] libmachine: (ha-263310-m03) Calling .DriverName
	I0916 10:47:35.314773   27017 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:47:35.314796   27017 main.go:141] libmachine: (ha-263310-m03) Calling .GetSSHHostname
	I0916 10:47:35.317304   27017 main.go:141] libmachine: (ha-263310-m03) DBG | domain ha-263310-m03 has defined MAC address 52:54:00:6e:2c:ab in network mk-ha-263310
	I0916 10:47:35.317694   27017 main.go:141] libmachine: (ha-263310-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:2c:ab", ip: ""} in network mk-ha-263310: {Iface:virbr1 ExpiryTime:2024-09-16 11:44:54 +0000 UTC Type:0 Mac:52:54:00:6e:2c:ab Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:ha-263310-m03 Clientid:01:52:54:00:6e:2c:ab}
	I0916 10:47:35.317715   27017 main.go:141] libmachine: (ha-263310-m03) DBG | domain ha-263310-m03 has defined IP address 192.168.39.129 and MAC address 52:54:00:6e:2c:ab in network mk-ha-263310
	I0916 10:47:35.317873   27017 main.go:141] libmachine: (ha-263310-m03) Calling .GetSSHPort
	I0916 10:47:35.318052   27017 main.go:141] libmachine: (ha-263310-m03) Calling .GetSSHKeyPath
	I0916 10:47:35.318189   27017 main.go:141] libmachine: (ha-263310-m03) Calling .GetSSHUsername
	I0916 10:47:35.318317   27017 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/ha-263310-m03/id_rsa Username:docker}
	I0916 10:47:35.403100   27017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:47:35.417849   27017 kubeconfig.go:125] found "ha-263310" server: "https://192.168.39.254:8443"
	I0916 10:47:35.417874   27017 api_server.go:166] Checking apiserver status ...
	I0916 10:47:35.417902   27017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:47:35.432668   27017 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1704/cgroup
	W0916 10:47:35.441754   27017 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1704/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:47:35.441802   27017 ssh_runner.go:195] Run: ls
	I0916 10:47:35.446055   27017 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 10:47:35.450203   27017 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 10:47:35.450229   27017 status.go:422] ha-263310-m03 apiserver status = Running (err=<nil>)
	I0916 10:47:35.450238   27017 status.go:257] ha-263310-m03 status: &{Name:ha-263310-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:47:35.450252   27017 status.go:255] checking status of ha-263310-m04 ...
	I0916 10:47:35.450528   27017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:47:35.450566   27017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:47:35.465075   27017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I0916 10:47:35.465515   27017 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:47:35.465934   27017 main.go:141] libmachine: Using API Version  1
	I0916 10:47:35.465956   27017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:47:35.466239   27017 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:47:35.466422   27017 main.go:141] libmachine: (ha-263310-m04) Calling .GetState
	I0916 10:47:35.468024   27017 status.go:330] ha-263310-m04 host status = "Running" (err=<nil>)
	I0916 10:47:35.468041   27017 host.go:66] Checking if "ha-263310-m04" exists ...
	I0916 10:47:35.468311   27017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:47:35.468342   27017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:47:35.482445   27017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43409
	I0916 10:47:35.482873   27017 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:47:35.483338   27017 main.go:141] libmachine: Using API Version  1
	I0916 10:47:35.483358   27017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:47:35.483685   27017 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:47:35.483851   27017 main.go:141] libmachine: (ha-263310-m04) Calling .GetIP
	I0916 10:47:35.486445   27017 main.go:141] libmachine: (ha-263310-m04) DBG | domain ha-263310-m04 has defined MAC address 52:54:00:5e:32:dc in network mk-ha-263310
	I0916 10:47:35.486866   27017 main.go:141] libmachine: (ha-263310-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:32:dc", ip: ""} in network mk-ha-263310: {Iface:virbr1 ExpiryTime:2024-09-16 11:46:22 +0000 UTC Type:0 Mac:52:54:00:5e:32:dc Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-263310-m04 Clientid:01:52:54:00:5e:32:dc}
	I0916 10:47:35.486895   27017 main.go:141] libmachine: (ha-263310-m04) DBG | domain ha-263310-m04 has defined IP address 192.168.39.148 and MAC address 52:54:00:5e:32:dc in network mk-ha-263310
	I0916 10:47:35.487015   27017 host.go:66] Checking if "ha-263310-m04" exists ...
	I0916 10:47:35.487377   27017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:47:35.487422   27017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:47:35.501609   27017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36281
	I0916 10:47:35.502060   27017 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:47:35.502497   27017 main.go:141] libmachine: Using API Version  1
	I0916 10:47:35.502515   27017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:47:35.502795   27017 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:47:35.502945   27017 main.go:141] libmachine: (ha-263310-m04) Calling .DriverName
	I0916 10:47:35.503156   27017 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:47:35.503177   27017 main.go:141] libmachine: (ha-263310-m04) Calling .GetSSHHostname
	I0916 10:47:35.505753   27017 main.go:141] libmachine: (ha-263310-m04) DBG | domain ha-263310-m04 has defined MAC address 52:54:00:5e:32:dc in network mk-ha-263310
	I0916 10:47:35.506150   27017 main.go:141] libmachine: (ha-263310-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:32:dc", ip: ""} in network mk-ha-263310: {Iface:virbr1 ExpiryTime:2024-09-16 11:46:22 +0000 UTC Type:0 Mac:52:54:00:5e:32:dc Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-263310-m04 Clientid:01:52:54:00:5e:32:dc}
	I0916 10:47:35.506167   27017 main.go:141] libmachine: (ha-263310-m04) DBG | domain ha-263310-m04 has defined IP address 192.168.39.148 and MAC address 52:54:00:5e:32:dc in network mk-ha-263310
	I0916 10:47:35.506320   27017 main.go:141] libmachine: (ha-263310-m04) Calling .GetSSHPort
	I0916 10:47:35.506478   27017 main.go:141] libmachine: (ha-263310-m04) Calling .GetSSHKeyPath
	I0916 10:47:35.506598   27017 main.go:141] libmachine: (ha-263310-m04) Calling .GetSSHUsername
	I0916 10:47:35.506723   27017 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/ha-263310-m04/id_rsa Username:docker}
	I0916 10:47:35.581862   27017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:47:35.595658   27017 status.go:257] ha-263310-m04 status: &{Name:ha-263310-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.07s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.38s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.38s)

TestMultiControlPlane/serial/RestartSecondaryNode (45.21s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 node start m02 -v=7 --alsologtostderr
E0916 10:47:50.369683   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:48:04.418087   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-263310 node start m02 -v=7 --alsologtostderr: (44.348684372s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (45.21s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.52s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (247.89s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-263310 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-263310 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-263310 -v=7 --alsologtostderr: (40.631284825s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-263310 --wait=true -v=7 --alsologtostderr
E0916 10:49:26.339642   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:51:42.478548   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:52:10.182109   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:52:22.666759   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-263310 --wait=true -v=7 --alsologtostderr: (3m27.170717774s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-263310
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (247.89s)

TestMultiControlPlane/serial/DeleteSecondaryNode (6.92s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-263310 node delete m03 -v=7 --alsologtostderr: (6.223893993s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (6.92s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.35s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.35s)

TestMultiControlPlane/serial/StopCluster (38.17s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-263310 stop -v=7 --alsologtostderr: (38.069914903s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-263310 status -v=7 --alsologtostderr: exit status 7 (98.209727ms)

-- stdout --
	ha-263310
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-263310-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-263310-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0916 10:53:14.971283   29401 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:53:14.971417   29401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:53:14.971427   29401 out.go:358] Setting ErrFile to fd 2...
	I0916 10:53:14.971434   29401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:53:14.971610   29401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3871/.minikube/bin
	I0916 10:53:14.971804   29401 out.go:352] Setting JSON to false
	I0916 10:53:14.971837   29401 mustload.go:65] Loading cluster: ha-263310
	I0916 10:53:14.971885   29401 notify.go:220] Checking for updates...
	I0916 10:53:14.972249   29401 config.go:182] Loaded profile config "ha-263310": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:53:14.972265   29401 status.go:255] checking status of ha-263310 ...
	I0916 10:53:14.972664   29401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:53:14.972726   29401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:53:14.993102   29401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38127
	I0916 10:53:14.993670   29401 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:53:14.994217   29401 main.go:141] libmachine: Using API Version  1
	I0916 10:53:14.994235   29401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:53:14.994634   29401 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:53:14.994815   29401 main.go:141] libmachine: (ha-263310) Calling .GetState
	I0916 10:53:14.996339   29401 status.go:330] ha-263310 host status = "Stopped" (err=<nil>)
	I0916 10:53:14.996353   29401 status.go:343] host is not running, skipping remaining checks
	I0916 10:53:14.996360   29401 status.go:257] ha-263310 status: &{Name:ha-263310 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:53:14.996403   29401 status.go:255] checking status of ha-263310-m02 ...
	I0916 10:53:14.996670   29401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:53:14.996705   29401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:53:15.010477   29401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37095
	I0916 10:53:15.010780   29401 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:53:15.011159   29401 main.go:141] libmachine: Using API Version  1
	I0916 10:53:15.011176   29401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:53:15.011500   29401 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:53:15.011645   29401 main.go:141] libmachine: (ha-263310-m02) Calling .GetState
	I0916 10:53:15.013005   29401 status.go:330] ha-263310-m02 host status = "Stopped" (err=<nil>)
	I0916 10:53:15.013018   29401 status.go:343] host is not running, skipping remaining checks
	I0916 10:53:15.013025   29401 status.go:257] ha-263310-m02 status: &{Name:ha-263310-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:53:15.013047   29401 status.go:255] checking status of ha-263310-m04 ...
	I0916 10:53:15.013414   29401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 10:53:15.013448   29401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 10:53:15.027491   29401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44023
	I0916 10:53:15.027916   29401 main.go:141] libmachine: () Calling .GetVersion
	I0916 10:53:15.028409   29401 main.go:141] libmachine: Using API Version  1
	I0916 10:53:15.028429   29401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 10:53:15.028703   29401 main.go:141] libmachine: () Calling .GetMachineName
	I0916 10:53:15.028951   29401 main.go:141] libmachine: (ha-263310-m04) Calling .GetState
	I0916 10:53:15.030437   29401 status.go:330] ha-263310-m04 host status = "Stopped" (err=<nil>)
	I0916 10:53:15.030457   29401 status.go:343] host is not running, skipping remaining checks
	I0916 10:53:15.030462   29401 status.go:257] ha-263310-m04 status: &{Name:ha-263310-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (38.17s)

TestMultiControlPlane/serial/RestartCluster (123.19s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-263310 --wait=true -v=7 --alsologtostderr --driver=kvm2 
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-263310 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m2.488458761s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (123.19s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.35s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.35s)

TestMultiControlPlane/serial/AddSecondaryNode (79.8s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-263310 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-263310 --control-plane -v=7 --alsologtostderr: (1m18.997349623s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-263310 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.80s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

TestImageBuild/serial/Setup (46.04s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-228840 --driver=kvm2 
E0916 10:56:42.478916   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:57:22.666793   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-228840 --driver=kvm2 : (46.036059647s)
--- PASS: TestImageBuild/serial/Setup (46.04s)

TestImageBuild/serial/NormalBuild (1.96s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-228840
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-228840: (1.95982376s)
--- PASS: TestImageBuild/serial/NormalBuild (1.96s)

TestImageBuild/serial/BuildWithBuildArg (1.1s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-228840
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-228840: (1.103586795s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.10s)

TestImageBuild/serial/BuildWithDockerIgnore (0.99s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-228840
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.99s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.91s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-228840
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.91s)

TestJSONOutput/start/Command (91.45s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-970673 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0916 10:58:45.731744   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-970673 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m31.452890432s)
--- PASS: TestJSONOutput/start/Command (91.45s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.55s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-970673 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.55s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.51s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-970673 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.51s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.49s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-970673 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-970673 --output=json --user=testUser: (12.49126302s)
--- PASS: TestJSONOutput/stop/Command (12.49s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-281961 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-281961 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (55.102355ms)

-- stdout --
	{"specversion":"1.0","id":"c6e2c490-5a35-4738-a945-56b9d548beb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-281961] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec2e3480-9129-4d0d-a605-77fda4e99a4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19651"}}
	{"specversion":"1.0","id":"461d5bda-dc65-441c-b4e8-a4394d4286a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7f14aa65-d4d6-4ebc-bc21-dc1077b7cd3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19651-3871/kubeconfig"}}
	{"specversion":"1.0","id":"167f4e21-8b0b-4829-b005-f0b32a989efc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3871/.minikube"}}
	{"specversion":"1.0","id":"a529c2e9-8b2e-44de-97c2-941eac90d835","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d7c7137a-2817-4e99-a7f6-8f96f9155eba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dc2a7734-8405-4a20-9ddf-f6bba7521f38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-281961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-281961
--- PASS: TestErrorJSONOutput (0.18s)
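The error event above shows the shape of minikube's `--output=json` stream: one CloudEvents JSON object per line, with the exit code carried in `data.exitcode`. A minimal sketch of pulling that field out of such a line, using a trimmed sample copied from this log and a plain `sed` extraction (an assumption for illustration; the real harness decodes the JSON properly rather than pattern-matching it):

```shell
# Trimmed io.k8s.sigs.minikube.error event, values copied from the log above.
line='{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS"}}'

# Pull the exitcode field without jq; assumes the key appears once per line.
exitcode=$(printf '%s' "$line" | sed -n 's/.*"exitcode":"\([0-9]*\)".*/\1/p')
echo "$exitcode"   # prints 56
```

The same one-object-per-line framing is what makes `grep`/`sed` passes over the stream workable in CI scripts, though a real consumer should use a JSON parser.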

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (98.79s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-145257 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-145257 --driver=kvm2 : (46.279863083s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-156883 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-156883 --driver=kvm2 : (50.167294684s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-145257
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-156883
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-156883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-156883
helpers_test.go:175: Cleaning up "first-145257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-145257
--- PASS: TestMinikubeProfile (98.79s)

TestMountStart/serial/StartWithMountFirst (28.4s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-737475 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-737475 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.397051385s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.40s)

TestMountStart/serial/VerifyMountFirst (0.35s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-737475 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-737475 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)
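The verification above boils down to `mount | grep 9p`: if the host share was attached, the guest's mount table contains a 9p entry. A sketch of that check against a hypothetical sample line (the exact `mount` output format is assumed; the `msize`/`port` values mirror the flags passed to `minikube start` above):

```shell
# Hypothetical mount-table line for the 9p host share at /minikube-host;
# option values echo the --mount-msize/--mount-port flags used by the test.
sample='192.168.39.1 on /minikube-host type 9p (rw,relatime,sync,dirsync,msize=6543,trans=tcp,port=46464)'

# Same predicate the test runs over the real mount table: any 9p entry present?
printf '%s\n' "$sample" | grep -q 9p && echo mounted   # prints mounted
```

On a live cluster the input would come from `minikube -p <profile> ssh -- mount` rather than a canned string.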

TestMountStart/serial/StartWithMountSecond (31.68s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-753699 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0916 11:01:42.478143   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-753699 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.678175403s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.68s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-753699 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-753699 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-737475 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

TestMountStart/serial/VerifyMountPostDelete (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-753699 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-753699 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

TestMountStart/serial/Stop (2.44s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-753699
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-753699: (2.436448361s)
--- PASS: TestMountStart/serial/Stop (2.44s)

TestMountStart/serial/RestartStopped (26.85s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-753699
E0916 11:02:22.666565   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-753699: (25.847006149s)
--- PASS: TestMountStart/serial/RestartStopped (26.85s)

TestMountStart/serial/VerifyMountPostStop (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-753699 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-753699 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

TestMultiNode/serial/FreshStart2Nodes (122.04s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-971910 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0916 11:03:05.544422   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-971910 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m1.665255734s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (122.04s)

TestMultiNode/serial/DeployApp2Nodes (4.06s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-971910 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-971910 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-971910 -- rollout status deployment/busybox: (2.632081394s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-971910 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-971910 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-971910 -- exec busybox-7dff88458-954w4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-971910 -- exec busybox-7dff88458-wmn2d -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-971910 -- exec busybox-7dff88458-954w4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-971910 -- exec busybox-7dff88458-wmn2d -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-971910 -- exec busybox-7dff88458-954w4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-971910 -- exec busybox-7dff88458-wmn2d -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.06s)

TestMultiNode/serial/PingHostFrom2Pods (0.75s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-971910 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-971910 -- exec busybox-7dff88458-954w4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-971910 -- exec busybox-7dff88458-954w4 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-971910 -- exec busybox-7dff88458-wmn2d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-971910 -- exec busybox-7dff88458-wmn2d -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)
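The extraction pipeline above, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, hardcodes the assumption that line 5 of busybox `nslookup` output carries the resolved address as its third space-separated field. A sketch of that pipeline against canned output shaped to match that assumption (the exact busybox layout is not shown in this log):

```shell
# Canned output laid out like busybox nslookup (assumed format): line 5 is
# "Address 1: <ip> <name>" for the queried host.
lookup='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'

# Same extraction the test runs inside the pod: take line 5, third field.
hostip=$(printf '%s\n' "$lookup" | awk 'NR==5' | cut -d' ' -f3)
echo "$hostip"   # prints 192.168.39.1
```

The brittleness of `NR==5` is why this kind of pipeline breaks when the resolver's output format shifts between busybox versions.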

TestMultiNode/serial/AddNode (57.15s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-971910 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-971910 -v 3 --alsologtostderr: (56.600766431s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.15s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-971910 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (6.91s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 cp testdata/cp-test.txt multinode-971910:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 ssh -n multinode-971910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 cp multinode-971910:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2442792422/001/cp-test_multinode-971910.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 ssh -n multinode-971910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 cp multinode-971910:/home/docker/cp-test.txt multinode-971910-m02:/home/docker/cp-test_multinode-971910_multinode-971910-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 ssh -n multinode-971910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 ssh -n multinode-971910-m02 "sudo cat /home/docker/cp-test_multinode-971910_multinode-971910-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 cp multinode-971910:/home/docker/cp-test.txt multinode-971910-m03:/home/docker/cp-test_multinode-971910_multinode-971910-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 ssh -n multinode-971910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 ssh -n multinode-971910-m03 "sudo cat /home/docker/cp-test_multinode-971910_multinode-971910-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 cp testdata/cp-test.txt multinode-971910-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 ssh -n multinode-971910-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 cp multinode-971910-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2442792422/001/cp-test_multinode-971910-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 ssh -n multinode-971910-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 cp multinode-971910-m02:/home/docker/cp-test.txt multinode-971910:/home/docker/cp-test_multinode-971910-m02_multinode-971910.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 ssh -n multinode-971910-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 ssh -n multinode-971910 "sudo cat /home/docker/cp-test_multinode-971910-m02_multinode-971910.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 cp multinode-971910-m02:/home/docker/cp-test.txt multinode-971910-m03:/home/docker/cp-test_multinode-971910-m02_multinode-971910-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 ssh -n multinode-971910-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 ssh -n multinode-971910-m03 "sudo cat /home/docker/cp-test_multinode-971910-m02_multinode-971910-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 cp testdata/cp-test.txt multinode-971910-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 ssh -n multinode-971910-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 cp multinode-971910-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2442792422/001/cp-test_multinode-971910-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 ssh -n multinode-971910-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 cp multinode-971910-m03:/home/docker/cp-test.txt multinode-971910:/home/docker/cp-test_multinode-971910-m03_multinode-971910.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 ssh -n multinode-971910-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 ssh -n multinode-971910 "sudo cat /home/docker/cp-test_multinode-971910-m03_multinode-971910.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 cp multinode-971910-m03:/home/docker/cp-test.txt multinode-971910-m02:/home/docker/cp-test_multinode-971910-m03_multinode-971910-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 ssh -n multinode-971910-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 ssh -n multinode-971910-m02 "sudo cat /home/docker/cp-test_multinode-971910-m03_multinode-971910-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.91s)
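Every step above follows one round-trip pattern: `minikube cp` a file onto a node, then `ssh -n <node> "sudo cat <path>"` and compare the bytes that come back. A local stand-in for that pattern, with plain `cp`/`cat` substituted for the minikube commands so the sketch is runnable without a cluster (the substitution is the only liberty taken):

```shell
# Local stand-in for the cp round-trip exercised by the test above.
src=$(mktemp) && dst=$(mktemp)
printf 'hello from cp-test\n' > "$src"

cp "$src" "$dst"          # stands in for: minikube -p <profile> cp "$src" <node>:/home/docker/cp-test.txt
roundtrip=$(cat "$dst")   # stands in for: minikube -p <profile> ssh -n <node> "sudo cat /home/docker/cp-test.txt"

# The test passes only if the bytes survive the trip unchanged.
[ "$roundtrip" = 'hello from cp-test' ] && echo ok
rm -f "$src" "$dst"
```

The test matrix simply repeats this for every (source, destination) node pair, including node-to-node copies relayed through the control plane.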

TestMultiNode/serial/StopNode (3.29s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-971910 node stop m03: (2.471588841s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-971910 status: exit status 7 (403.739567ms)

-- stdout --
	multinode-971910
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-971910-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-971910-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-971910 status --alsologtostderr: exit status 7 (417.816409ms)

-- stdout --
	multinode-971910
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-971910-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-971910-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0916 11:05:45.043916   37729 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:05:45.044040   37729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:05:45.044051   37729 out.go:358] Setting ErrFile to fd 2...
	I0916 11:05:45.044058   37729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:05:45.044248   37729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3871/.minikube/bin
	I0916 11:05:45.044401   37729 out.go:352] Setting JSON to false
	I0916 11:05:45.044428   37729 mustload.go:65] Loading cluster: multinode-971910
	I0916 11:05:45.044471   37729 notify.go:220] Checking for updates...
	I0916 11:05:45.044971   37729 config.go:182] Loaded profile config "multinode-971910": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 11:05:45.044991   37729 status.go:255] checking status of multinode-971910 ...
	I0916 11:05:45.045546   37729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 11:05:45.045589   37729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:05:45.066345   37729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34945
	I0916 11:05:45.066801   37729 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:05:45.067376   37729 main.go:141] libmachine: Using API Version  1
	I0916 11:05:45.067397   37729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:05:45.067867   37729 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:05:45.068109   37729 main.go:141] libmachine: (multinode-971910) Calling .GetState
	I0916 11:05:45.069823   37729 status.go:330] multinode-971910 host status = "Running" (err=<nil>)
	I0916 11:05:45.069838   37729 host.go:66] Checking if "multinode-971910" exists ...
	I0916 11:05:45.070146   37729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 11:05:45.070188   37729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:05:45.085166   37729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37833
	I0916 11:05:45.085619   37729 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:05:45.086071   37729 main.go:141] libmachine: Using API Version  1
	I0916 11:05:45.086097   37729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:05:45.086425   37729 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:05:45.086591   37729 main.go:141] libmachine: (multinode-971910) Calling .GetIP
	I0916 11:05:45.089496   37729 main.go:141] libmachine: (multinode-971910) DBG | domain multinode-971910 has defined MAC address 52:54:00:d0:39:28 in network mk-multinode-971910
	I0916 11:05:45.089918   37729 main.go:141] libmachine: (multinode-971910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:39:28", ip: ""} in network mk-multinode-971910: {Iface:virbr1 ExpiryTime:2024-09-16 12:02:44 +0000 UTC Type:0 Mac:52:54:00:d0:39:28 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:multinode-971910 Clientid:01:52:54:00:d0:39:28}
	I0916 11:05:45.089941   37729 main.go:141] libmachine: (multinode-971910) DBG | domain multinode-971910 has defined IP address 192.168.39.143 and MAC address 52:54:00:d0:39:28 in network mk-multinode-971910
	I0916 11:05:45.090084   37729 host.go:66] Checking if "multinode-971910" exists ...
	I0916 11:05:45.090407   37729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 11:05:45.090448   37729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:05:45.105490   37729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34531
	I0916 11:05:45.105952   37729 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:05:45.106451   37729 main.go:141] libmachine: Using API Version  1
	I0916 11:05:45.106470   37729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:05:45.106807   37729 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:05:45.106980   37729 main.go:141] libmachine: (multinode-971910) Calling .DriverName
	I0916 11:05:45.107140   37729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:05:45.107159   37729 main.go:141] libmachine: (multinode-971910) Calling .GetSSHHostname
	I0916 11:05:45.109961   37729 main.go:141] libmachine: (multinode-971910) DBG | domain multinode-971910 has defined MAC address 52:54:00:d0:39:28 in network mk-multinode-971910
	I0916 11:05:45.110313   37729 main.go:141] libmachine: (multinode-971910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:39:28", ip: ""} in network mk-multinode-971910: {Iface:virbr1 ExpiryTime:2024-09-16 12:02:44 +0000 UTC Type:0 Mac:52:54:00:d0:39:28 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:multinode-971910 Clientid:01:52:54:00:d0:39:28}
	I0916 11:05:45.110348   37729 main.go:141] libmachine: (multinode-971910) DBG | domain multinode-971910 has defined IP address 192.168.39.143 and MAC address 52:54:00:d0:39:28 in network mk-multinode-971910
	I0916 11:05:45.110440   37729 main.go:141] libmachine: (multinode-971910) Calling .GetSSHPort
	I0916 11:05:45.110593   37729 main.go:141] libmachine: (multinode-971910) Calling .GetSSHKeyPath
	I0916 11:05:45.110751   37729 main.go:141] libmachine: (multinode-971910) Calling .GetSSHUsername
	I0916 11:05:45.110873   37729 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/multinode-971910/id_rsa Username:docker}
	I0916 11:05:45.194311   37729 ssh_runner.go:195] Run: systemctl --version
	I0916 11:05:45.200077   37729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:05:45.213973   37729 kubeconfig.go:125] found "multinode-971910" server: "https://192.168.39.143:8443"
	I0916 11:05:45.214011   37729 api_server.go:166] Checking apiserver status ...
	I0916 11:05:45.214051   37729 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:05:45.230319   37729 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1879/cgroup
	W0916 11:05:45.240807   37729 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1879/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 11:05:45.240876   37729 ssh_runner.go:195] Run: ls
	I0916 11:05:45.245264   37729 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I0916 11:05:45.249925   37729 api_server.go:279] https://192.168.39.143:8443/healthz returned 200:
	ok
	I0916 11:05:45.249949   37729 status.go:422] multinode-971910 apiserver status = Running (err=<nil>)
	I0916 11:05:45.249965   37729 status.go:257] multinode-971910 status: &{Name:multinode-971910 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 11:05:45.249980   37729 status.go:255] checking status of multinode-971910-m02 ...
	I0916 11:05:45.250263   37729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 11:05:45.250294   37729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:05:45.265141   37729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38989
	I0916 11:05:45.265576   37729 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:05:45.266014   37729 main.go:141] libmachine: Using API Version  1
	I0916 11:05:45.266034   37729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:05:45.266304   37729 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:05:45.266495   37729 main.go:141] libmachine: (multinode-971910-m02) Calling .GetState
	I0916 11:05:45.267894   37729 status.go:330] multinode-971910-m02 host status = "Running" (err=<nil>)
	I0916 11:05:45.267913   37729 host.go:66] Checking if "multinode-971910-m02" exists ...
	I0916 11:05:45.268204   37729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 11:05:45.268252   37729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:05:45.284955   37729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40831
	I0916 11:05:45.285406   37729 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:05:45.285837   37729 main.go:141] libmachine: Using API Version  1
	I0916 11:05:45.285849   37729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:05:45.286131   37729 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:05:45.286305   37729 main.go:141] libmachine: (multinode-971910-m02) Calling .GetIP
	I0916 11:05:45.288958   37729 main.go:141] libmachine: (multinode-971910-m02) DBG | domain multinode-971910-m02 has defined MAC address 52:54:00:a8:59:79 in network mk-multinode-971910
	I0916 11:05:45.289322   37729 main.go:141] libmachine: (multinode-971910-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:59:79", ip: ""} in network mk-multinode-971910: {Iface:virbr1 ExpiryTime:2024-09-16 12:03:53 +0000 UTC Type:0 Mac:52:54:00:a8:59:79 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-971910-m02 Clientid:01:52:54:00:a8:59:79}
	I0916 11:05:45.289361   37729 main.go:141] libmachine: (multinode-971910-m02) DBG | domain multinode-971910-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:a8:59:79 in network mk-multinode-971910
	I0916 11:05:45.289506   37729 host.go:66] Checking if "multinode-971910-m02" exists ...
	I0916 11:05:45.289804   37729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 11:05:45.289837   37729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:05:45.305923   37729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45599
	I0916 11:05:45.306398   37729 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:05:45.306852   37729 main.go:141] libmachine: Using API Version  1
	I0916 11:05:45.306873   37729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:05:45.307173   37729 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:05:45.307351   37729 main.go:141] libmachine: (multinode-971910-m02) Calling .DriverName
	I0916 11:05:45.307524   37729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:05:45.307555   37729 main.go:141] libmachine: (multinode-971910-m02) Calling .GetSSHHostname
	I0916 11:05:45.310018   37729 main.go:141] libmachine: (multinode-971910-m02) DBG | domain multinode-971910-m02 has defined MAC address 52:54:00:a8:59:79 in network mk-multinode-971910
	I0916 11:05:45.310432   37729 main.go:141] libmachine: (multinode-971910-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:59:79", ip: ""} in network mk-multinode-971910: {Iface:virbr1 ExpiryTime:2024-09-16 12:03:53 +0000 UTC Type:0 Mac:52:54:00:a8:59:79 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-971910-m02 Clientid:01:52:54:00:a8:59:79}
	I0916 11:05:45.310454   37729 main.go:141] libmachine: (multinode-971910-m02) DBG | domain multinode-971910-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:a8:59:79 in network mk-multinode-971910
	I0916 11:05:45.310609   37729 main.go:141] libmachine: (multinode-971910-m02) Calling .GetSSHPort
	I0916 11:05:45.310777   37729 main.go:141] libmachine: (multinode-971910-m02) Calling .GetSSHKeyPath
	I0916 11:05:45.310894   37729 main.go:141] libmachine: (multinode-971910-m02) Calling .GetSSHUsername
	I0916 11:05:45.311039   37729 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19651-3871/.minikube/machines/multinode-971910-m02/id_rsa Username:docker}
	I0916 11:05:45.389724   37729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:05:45.402777   37729 status.go:257] multinode-971910-m02 status: &{Name:multinode-971910-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0916 11:05:45.402820   37729 status.go:255] checking status of multinode-971910-m03 ...
	I0916 11:05:45.403141   37729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 11:05:45.403196   37729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:05:45.418081   37729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42181
	I0916 11:05:45.418453   37729 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:05:45.418914   37729 main.go:141] libmachine: Using API Version  1
	I0916 11:05:45.418935   37729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:05:45.419246   37729 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:05:45.419432   37729 main.go:141] libmachine: (multinode-971910-m03) Calling .GetState
	I0916 11:05:45.420837   37729 status.go:330] multinode-971910-m03 host status = "Stopped" (err=<nil>)
	I0916 11:05:45.420853   37729 status.go:343] host is not running, skipping remaining checks
	I0916 11:05:45.420860   37729 status.go:257] multinode-971910-m03 status: &{Name:multinode-971910-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.29s)

TestMultiNode/serial/StartAfterStop (41.82s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-971910 node start m03 -v=7 --alsologtostderr: (41.227445795s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.82s)

TestMultiNode/serial/RestartKeepsNodes (189.73s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-971910
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-971910
E0916 11:06:42.478956   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-971910: (27.193997766s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-971910 --wait=true -v=8 --alsologtostderr
E0916 11:07:22.667536   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-971910 --wait=true -v=8 --alsologtostderr: (2m42.450192456s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-971910
--- PASS: TestMultiNode/serial/RestartKeepsNodes (189.73s)

TestMultiNode/serial/DeleteNode (2.16s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-971910 node delete m03: (1.660489226s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.16s)

TestMultiNode/serial/StopMultiNode (24.9s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-971910 stop: (24.738931226s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-971910 status: exit status 7 (80.308149ms)

-- stdout --
	multinode-971910
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-971910-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-971910 status --alsologtostderr: exit status 7 (81.139873ms)

-- stdout --
	multinode-971910
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-971910-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0916 11:10:03.997553   39534 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:10:03.997797   39534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:10:03.997807   39534 out.go:358] Setting ErrFile to fd 2...
	I0916 11:10:03.997811   39534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:10:03.998017   39534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3871/.minikube/bin
	I0916 11:10:03.998160   39534 out.go:352] Setting JSON to false
	I0916 11:10:03.998186   39534 mustload.go:65] Loading cluster: multinode-971910
	I0916 11:10:03.998230   39534 notify.go:220] Checking for updates...
	I0916 11:10:03.998528   39534 config.go:182] Loaded profile config "multinode-971910": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 11:10:03.998541   39534 status.go:255] checking status of multinode-971910 ...
	I0916 11:10:03.998908   39534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 11:10:03.998950   39534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:10:04.017901   39534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41717
	I0916 11:10:04.018344   39534 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:10:04.018852   39534 main.go:141] libmachine: Using API Version  1
	I0916 11:10:04.018870   39534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:10:04.019170   39534 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:10:04.019363   39534 main.go:141] libmachine: (multinode-971910) Calling .GetState
	I0916 11:10:04.020734   39534 status.go:330] multinode-971910 host status = "Stopped" (err=<nil>)
	I0916 11:10:04.020744   39534 status.go:343] host is not running, skipping remaining checks
	I0916 11:10:04.020749   39534 status.go:257] multinode-971910 status: &{Name:multinode-971910 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 11:10:04.020770   39534 status.go:255] checking status of multinode-971910-m02 ...
	I0916 11:10:04.021062   39534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 11:10:04.021093   39534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 11:10:04.035040   39534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35895
	I0916 11:10:04.035422   39534 main.go:141] libmachine: () Calling .GetVersion
	I0916 11:10:04.035833   39534 main.go:141] libmachine: Using API Version  1
	I0916 11:10:04.035850   39534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 11:10:04.036146   39534 main.go:141] libmachine: () Calling .GetMachineName
	I0916 11:10:04.036294   39534 main.go:141] libmachine: (multinode-971910-m02) Calling .GetState
	I0916 11:10:04.037810   39534 status.go:330] multinode-971910-m02 host status = "Stopped" (err=<nil>)
	I0916 11:10:04.037823   39534 status.go:343] host is not running, skipping remaining checks
	I0916 11:10:04.037830   39534 status.go:257] multinode-971910-m02 status: &{Name:multinode-971910-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.90s)

TestMultiNode/serial/RestartMultiNode (197.86s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-971910 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0916 11:11:42.478735   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:12:22.667094   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-971910 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (3m17.364962399s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-971910 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (197.86s)

TestMultiNode/serial/ValidateNameConflict (46.19s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-971910
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-971910-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-971910-m02 --driver=kvm2 : exit status 14 (57.251094ms)

-- stdout --
	* [multinode-971910-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3871/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3871/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-971910-m02' is duplicated with machine name 'multinode-971910-m02' in profile 'multinode-971910'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-971910-m03 --driver=kvm2 
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-971910-m03 --driver=kvm2 : (45.128622964s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-971910
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-971910: exit status 80 (198.947114ms)

-- stdout --
	* Adding node m03 to cluster multinode-971910 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-971910-m03 already exists in multinode-971910-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-971910-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.19s)

TestPreload (187.05s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-128345 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0916 11:15:25.734085   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-128345 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m59.974940028s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-128345 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-128345 image pull gcr.io/k8s-minikube/busybox: (1.411940798s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-128345
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-128345: (12.483849197s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-128345 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0916 11:16:42.478928   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-128345 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (52.145007255s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-128345 image list
helpers_test.go:175: Cleaning up "test-preload-128345" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-128345
--- PASS: TestPreload (187.05s)

TestScheduledStopUnix (121.5s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-296915 --memory=2048 --driver=kvm2 
E0916 11:17:22.667328   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-296915 --memory=2048 --driver=kvm2 : (50.012454034s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-296915 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-296915 -n scheduled-stop-296915
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-296915 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-296915 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-296915 -n scheduled-stop-296915
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-296915
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-296915 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-296915
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-296915: exit status 7 (60.230718ms)

-- stdout --
	scheduled-stop-296915
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-296915 -n scheduled-stop-296915
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-296915 -n scheduled-stop-296915: exit status 7 (63.34932ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-296915" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-296915
--- PASS: TestScheduledStopUnix (121.50s)

TestSkaffold (122.33s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe386657983 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-413609 --memory=2600 --driver=kvm2 
E0916 11:19:45.546434   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-413609 --memory=2600 --driver=kvm2 : (46.179480779s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe386657983 run --minikube-profile skaffold-413609 --kube-context skaffold-413609 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe386657983 run --minikube-profile skaffold-413609 --kube-context skaffold-413609 --status-check=true --port-forward=false --interactive=false: (1m3.071086198s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7fb55698df-v8t5m" [6e73d073-750a-4094-81fc-e22409c26b8f] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003358063s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-687649d4b-g646f" [50467988-f52d-4673-96e4-c776087f10b5] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003431173s
helpers_test.go:175: Cleaning up "skaffold-413609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-413609
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-413609: (1.134211415s)
--- PASS: TestSkaffold (122.33s)

TestRunningBinaryUpgrade (184.2s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.24495332 start -p running-upgrade-549785 --memory=2200 --vm-driver=kvm2 
E0916 11:22:22.667540   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.24495332 start -p running-upgrade-549785 --memory=2200 --vm-driver=kvm2 : (1m43.238380897s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-549785 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-549785 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m19.426197515s)
helpers_test.go:175: Cleaning up "running-upgrade-549785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-549785
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-549785: (1.136516192s)
--- PASS: TestRunningBinaryUpgrade (184.20s)

TestKubernetesUpgrade (225.66s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-674565 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-674565 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (1m46.802249001s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-674565
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-674565: (3.845146894s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-674565 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-674565 status --format={{.Host}}: exit status 7 (69.081015ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-674565 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-674565 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 : (56.267573376s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-674565 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-674565 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-674565 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (80.432596ms)

-- stdout --
	* [kubernetes-upgrade-674565] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3871/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3871/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-674565
	    minikube start -p kubernetes-upgrade-674565 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6745652 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-674565 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-674565 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-674565 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 : (57.578321078s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-674565" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-674565
--- PASS: TestKubernetesUpgrade (225.66s)

TestStoppedBinaryUpgrade/Setup (0.52s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.52s)

TestStoppedBinaryUpgrade/Upgrade (215.53s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.828409003 start -p stopped-upgrade-874902 --memory=2200 --vm-driver=kvm2 
E0916 11:21:42.478448   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.828409003 start -p stopped-upgrade-874902 --memory=2200 --vm-driver=kvm2 : (2m7.673302309s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.828409003 -p stopped-upgrade-874902 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.828409003 -p stopped-upgrade-874902 stop: (12.341920963s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-874902 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-874902 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m15.51179852s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (215.53s)
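The Upgrade subtest above is three binary invocations in sequence: provision with the old release, stop it, then start the same profile with the freshly built binary. A side-effect-free sketch of that flow (binary paths and profile name copied from the log; the `DRY=echo` guard is my addition so nothing actually runs):

```shell
# Dry-run of the stopped-binary upgrade flow; set DRY= to execute for real.
DRY=echo
old=/tmp/minikube-v1.26.0.828409003   # legacy v1.26.0 binary (path is run-specific)
new=out/minikube-linux-amd64          # freshly built binary under test
profile=stopped-upgrade-874902

$DRY "$old" start -p "$profile" --memory=2200 --vm-driver=kvm2   # provision on old release
$DRY "$old" stop -p "$profile"                                   # stop while still on old release
$DRY "$new" start -p "$profile" --memory=2200 --driver=kvm2      # in-place upgrade on restart
```

With `DRY=echo` the script only prints the three commands it would run, which makes the scenario easy to eyeball before pointing it at real binaries.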

                                                
                                    
TestPause/serial/Start (128.51s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-224354 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-224354 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (2m8.506163192s)
--- PASS: TestPause/serial/Start (128.51s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.33s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-874902
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-874902: (1.332324039s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.33s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-091131 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-091131 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (68.496736ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-091131] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3871/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3871/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
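The exit status 14 above is minikube's MK_USAGE error class for invalid flag combinations. The guard it applies can be sketched as a plain shell function (the `check_flags` name and its argument convention are mine, not minikube's; the exit code and message are copied from the test expectation):

```shell
# Reject --kubernetes-version when --no-kubernetes is set, mirroring the
# MK_USAGE failure captured in the log above.
check_flags() {
  local no_kubernetes="$1" kubernetes_version="$2"
  if [ "$no_kubernetes" = "true" ] && [ -n "$kubernetes_version" ]; then
    echo "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes" >&2
    return 14
  fi
}

# The || keeps the non-zero return from aborting the script under `set -e`.
check_flags true 1.20 || echo "rejected with exit=$?"
```

Running it prints the MK_USAGE line to stderr followed by `rejected with exit=14`, matching the 14 the test asserts on.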

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-091131 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-091131 --driver=kvm2 : (48.75992206s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-091131 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (49.00s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (98.29s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-224354 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-224354 --alsologtostderr -v=1 --driver=kvm2 : (1m38.259719636s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (98.29s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (47.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-091131 --no-kubernetes --driver=kvm2 
E0916 11:26:08.127451   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/skaffold-413609/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:26:08.133877   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/skaffold-413609/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:26:08.145273   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/skaffold-413609/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:26:08.166725   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/skaffold-413609/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:26:08.208195   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/skaffold-413609/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:26:08.289651   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/skaffold-413609/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:26:08.451973   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/skaffold-413609/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:26:08.773266   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/skaffold-413609/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:26:09.415563   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/skaffold-413609/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:26:10.697515   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/skaffold-413609/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:26:13.259490   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/skaffold-413609/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:26:18.381719   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/skaffold-413609/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-091131 --no-kubernetes --driver=kvm2 : (46.533676815s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-091131 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-091131 status -o json: exit status 2 (243.451651ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-091131","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-091131
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-091131: (1.025822489s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (47.80s)
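The PASS here hinges on the JSON above reporting the Host as Running while Kubelet and APIServer are Stopped. Those fields can be pulled out of the captured `status -o json` line with sed (the JSON string below is copied verbatim from the log; `jq` would be sturdier, sed keeps the sketch dependency-free):

```shell
# Extract Host/Kubelet state from minikube's `status -o json` output.
status='{"Name":"NoKubernetes-091131","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
host=$(printf '%s' "$status" | sed -n 's/.*"Host":"\([^"]*\)".*/\1/p')
kubelet=$(printf '%s' "$status" | sed -n 's/.*"Kubelet":"\([^"]*\)".*/\1/p')
echo "host=$host kubelet=$kubelet"   # prints: host=Running kubelet=Stopped
```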

                                                
                                    
TestNoKubernetes/serial/Start (47.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-091131 --no-kubernetes --driver=kvm2 
E0916 11:26:42.478320   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-091131 --no-kubernetes --driver=kvm2 : (47.036539018s)
--- PASS: TestNoKubernetes/serial/Start (47.04s)

                                                
                                    
TestPause/serial/Pause (0.62s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-224354 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.62s)

                                                
                                    
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-224354 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-224354 --output=json --layout=cluster: exit status 2 (250.621261ms)

                                                
                                                
-- stdout --
	{"Name":"pause-224354","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-224354","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
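The non-zero exit here is expected: with a paused cluster, `minikube status` signals component state through its exit code (2 in this run), while `--layout=cluster` reuses HTTP-flavored status codes inside the JSON, as the captured output shows: 200 OK, 405 Stopped, 418 Paused. Listing the component states from that output (JSON trimmed to the node-components fragment from the log):

```shell
# Enumerate StatusName values from the pause-224354 status JSON above.
nodes='{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}'
printf '%s\n' "$nodes" | grep -o '"StatusName":"[^"]*"'
# prints:
#   "StatusName":"Paused"
#   "StatusName":"Stopped"
```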

                                                
                                    
TestPause/serial/Unpause (0.57s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-224354 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.57s)

                                                
                                    
TestPause/serial/PauseAgain (0.63s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-224354 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.63s)

                                                
                                    
TestPause/serial/DeletePaused (0.98s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-224354 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.98s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.37s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.37s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-091131 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-091131 "sudo systemctl is-active --quiet service kubelet": exit status 1 (181.765687ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)
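`systemctl is-active --quiet` prints nothing and communicates purely through its exit code: 0 for an active unit, non-zero otherwise (3 meaning inactive on typical systemd versions, which matches the `Process exited with status 3` above), and the ssh wrapper propagates that code. The decision the test makes, with a stub standing in for the real unit query:

```shell
# Stub for `sudo systemctl is-active --quiet service kubelet`; returning 3
# mimics an inactive unit, which is exactly what the passing test relies on.
kubelet_active() { return 3; }

if kubelet_active; then
  echo "kubelet is running"
else
  # $? still holds the condition's exit status here.
  echo "kubelet not running (exit $?)"   # prints: kubelet not running (exit 3)
fi
```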

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.75s)

                                                
                                    
TestNoKubernetes/serial/Stop (3.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-091131
E0916 11:27:30.068068   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/skaffold-413609/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-091131: (3.27916085s)
--- PASS: TestNoKubernetes/serial/Stop (3.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (69.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-091131 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-091131 --driver=kvm2 : (1m9.339127226s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (69.34s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-091131 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-091131 "sudo systemctl is-active --quiet service kubelet": exit status 1 (203.110708ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (88.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-997621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-997621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m28.268841253s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (94.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-997621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-997621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m34.141397168s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (94.14s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-997621 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-997621 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-szgnx" [515a12d3-5fa6-4f8a-a3fa-987971af8759] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-szgnx" [515a12d3-5fa6-4f8a-a3fa-987971af8759] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.00447397s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (87.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-997621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-997621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m27.487627092s)
--- PASS: TestNetworkPlugins/group/calico/Start (87.49s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-l7dkd" [1f01c975-3deb-4f71-8cfe-05d772c7c032] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004369546s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-997621 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-997621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-997621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (93.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-997621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-997621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m33.237396757s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (93.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-997621 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-997621 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cs4kt" [c6e7052c-cedc-41f2-9be8-a593646799b6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cs4kt" [c6e7052c-cedc-41f2-9be8-a593646799b6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.00422672s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/false/Start (100.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-997621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-997621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m40.150405375s)
--- PASS: TestNetworkPlugins/group/false/Start (100.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-997621 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-997621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-997621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (137.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-997621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0916 11:31:08.127175   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/skaffold-413609/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:31:35.831900   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/skaffold-413609/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-997621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (2m17.927994046s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (137.93s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7nsvg" [ebb961ea-9fa1-4998-973b-1c3900d7d546] Running
E0916 11:31:42.478727   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007800466s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-997621 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (14.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-997621 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-596mh" [e936b52e-3b5f-4335-9668-9872a285ac4f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-596mh" [e936b52e-3b5f-4335-9668-9872a285ac4f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.004543279s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-997621 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-997621 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-529w6" [fac7ced4-775f-4528-9b0b-d58c26fe0c8a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-529w6" [fac7ced4-775f-4528-9b0b-d58c26fe0c8a] Running
E0916 11:32:05.736023   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.00500556s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-997621 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-997621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-997621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-997621 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-997621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-997621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-997621 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.49s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-997621 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-st8jf" [eaef270b-4880-4f19-a9f2-e2fb2792e707] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-st8jf" [eaef270b-4880-4f19-a9f2-e2fb2792e707] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.003896619s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.56s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-997621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
E0916 11:32:22.666689   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-997621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m9.618630973s)
--- PASS: TestNetworkPlugins/group/flannel/Start (69.62s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-997621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-997621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (2m3.419811781s)
--- PASS: TestNetworkPlugins/group/bridge/Start (123.42s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-997621 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context false-997621 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.170588377s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context false-997621 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context false-997621 exec deployment/netcat -- nslookup kubernetes.default: (5.179402311s)
--- PASS: TestNetworkPlugins/group/false/DNS (21.11s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-997621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-997621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-997621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-997621 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m42.144845721s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (102.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-997621 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-997621 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nzjkl" [75300733-6785-449e-aafd-e60109c842db] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nzjkl" [75300733-6785-449e-aafd-e60109c842db] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.00794968s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-997621 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-997621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-997621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-8n9pb" [0504d454-ecbc-4880-a082-5a45871ba168] Running
E0916 11:33:33.125884   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/gvisor-413772/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:33:33.132323   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/gvisor-413772/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:33:33.143752   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/gvisor-413772/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:33:33.165474   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/gvisor-413772/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:33:33.206884   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/gvisor-413772/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:33:33.288256   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/gvisor-413772/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:33:33.449826   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/gvisor-413772/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:33:33.771741   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/gvisor-413772/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:33:34.413488   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/gvisor-413772/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003949493s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-997621 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-997621 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gk69f" [e8e0be25-4f9c-46bd-a930-63dd2d81457c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0916 11:33:35.695009   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/gvisor-413772/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:33:38.256755   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/gvisor-413772/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-gk69f" [e8e0be25-4f9c-46bd-a930-63dd2d81457c] Running
E0916 11:33:43.378688   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/gvisor-413772/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.006292804s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-857746 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-857746 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (2m24.378029691s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (144.38s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-997621 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.29s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-997621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-997621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-750749 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1
E0916 11:34:14.103109   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/gvisor-413772/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-750749 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1: (1m43.461456299s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (103.46s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-997621 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-997621 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-g6bcl" [7ccba352-8f21-40de-a2f4-6655b0939cf7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-g6bcl" [7ccba352-8f21-40de-a2f4-6655b0939cf7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004378618s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.24s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-997621 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-997621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-997621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-997621 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-997621 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lfszh" [7641d189-0532-4455-82e3-23e519b43347] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-lfszh" [7641d189-0532-4455-82e3-23e519b43347] Running
E0916 11:34:55.065184   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/gvisor-413772/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.004526274s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.25s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-997621 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-997621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-997621 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.17s)
E0916 11:41:59.132664   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/custom-flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:42:07.775843   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/calico-997621/client.crt: no such file or directory" logger="UnhandledError"
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-942513 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-942513 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1: (1m5.103266345s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (65.10s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-318774 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1
E0916 11:35:20.494061   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/auto-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:35:21.204770   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kindnet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:35:21.211150   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kindnet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:35:21.222511   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kindnet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:35:21.244185   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kindnet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:35:21.285642   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kindnet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:35:21.367167   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kindnet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:35:21.528702   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kindnet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:35:21.850852   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kindnet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:35:22.492427   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kindnet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:35:23.774262   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kindnet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:35:26.336457   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kindnet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:35:30.735436   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/auto-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:35:31.458420   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kindnet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:35:41.700658   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kindnet-997621/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-318774 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1: (1m13.39446904s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.39s)
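The `cert_rotation.go:171` lines repeated above come from client-go still watching `client.crt` files of minikube profiles that have already been deleted; they are noise and fail no test. A minimal sketch for tallying that noise per profile (assumption: the report is saved to a plain-text log file; two sample lines from this run stand in for it here):

```shell
# Tally "client.crt: no such file or directory" errors per minikube profile.
# Assumption: the report text is available as a local log file; the here-doc
# below substitutes two sample lines copied from this run.
log=$(mktemp)
cat >"$log" <<'EOF'
E0916 11:35:21.222511   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kindnet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:35:30.735436   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/auto-997621/client.crt: no such file or directory" logger="UnhandledError"
EOF
# One output line per profile, prefixed with its error count.
grep -o 'profiles/[^/]*/client\.crt' "$log" | sort | uniq -c | sort -rn
rm -f "$log"
```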

TestStartStop/group/no-preload/serial/DeployApp (8.32s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-750749 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1eb38e30-5779-47f0-a2a7-b59642e80fd8] Pending
helpers_test.go:344: "busybox" [1eb38e30-5779-47f0-a2a7-b59642e80fd8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0916 11:35:51.217433   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/auto-997621/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [1eb38e30-5779-47f0-a2a7-b59642e80fd8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.005477192s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-750749 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.32s)
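Each DeployApp test finishes by exec'ing `ulimit -n` inside the busybox pod to read the container's open-file-descriptor limit. Outside a cluster, the same probe is a one-liner (sketch; the printed value is environment-dependent):

```shell
# Print the soft limit on open file descriptors, as the test does in the pod.
# The exact number depends on the host; the test only needs the command to run.
sh -c 'ulimit -n'
```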

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-750749 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-750749 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.183745513s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-750749 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.29s)

TestStartStop/group/no-preload/serial/Stop (13.37s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-750749 --alsologtostderr -v=3
E0916 11:36:02.182496   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kindnet-997621/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-750749 --alsologtostderr -v=3: (13.368108787s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.37s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-857746 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b2380871-87d5-4c70-8fbb-6bf9b10b9e18] Pending
helpers_test.go:344: "busybox" [b2380871-87d5-4c70-8fbb-6bf9b10b9e18] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b2380871-87d5-4c70-8fbb-6bf9b10b9e18] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004992477s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-857746 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.08s)

TestStartStop/group/embed-certs/serial/DeployApp (8.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-942513 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d0ed6399-2f01-4880-96b0-3d04b7e58e16] Pending
helpers_test.go:344: "busybox" [d0ed6399-2f01-4880-96b0-3d04b7e58e16] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0916 11:36:08.126623   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/skaffold-413609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [d0ed6399-2f01-4880-96b0-3d04b7e58e16] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004709833s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-942513 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.36s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-750749 -n no-preload-750749
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-750749 -n no-preload-750749: exit status 7 (75.937657ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-750749 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
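`status error: exit status 7 (may be ok)` reflects that `minikube status` exits non-zero when the host is stopped, which is expected right after a `stop`. A sketch of capturing that exit code without aborting a `set -e` script (the `fake_minikube_status` helper below is hypothetical and stands in for the real binary):

```shell
#!/bin/sh
set -e
# Hypothetical stand-in for `minikube status --format={{.Host}} -p <profile>`:
# prints the host state and, like the run above, exits 7 for a stopped host.
fake_minikube_status() {
  echo "Stopped"
  return 7
}
# Capture the exit code instead of letting `set -e` kill the script.
out=$(fake_minikube_status) && rc=0 || rc=$?
echo "host=$out rc=$rc"   # prints: host=Stopped rc=7
```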

TestStartStop/group/no-preload/serial/SecondStart (296.76s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-750749 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-750749 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1: (4m56.510033572s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-750749 -n no-preload-750749
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (296.76s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-942513 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-942513 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-857746 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-857746 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/embed-certs/serial/Stop (13.42s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-942513 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-942513 --alsologtostderr -v=3: (13.419537556s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.42s)

TestStartStop/group/old-k8s-version/serial/Stop (13.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-857746 --alsologtostderr -v=3
E0916 11:36:16.987377   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/gvisor-413772/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:25.548270   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-857746 --alsologtostderr -v=3: (13.377248498s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.38s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-942513 -n embed-certs-942513
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-942513 -n embed-certs-942513: exit status 7 (74.477114ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-942513 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (303.03s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-942513 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-942513 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1: (5m2.715697495s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-942513 -n embed-certs-942513
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (303.03s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-857746 -n old-k8s-version-857746
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-857746 -n old-k8s-version-857746: exit status 7 (65.043203ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-857746 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (426.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-857746 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-857746 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (7m5.930646509s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-857746 -n old-k8s-version-857746
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (426.17s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-318774 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [aa1b6221-ae28-4efa-9090-1570f61f3af2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0916 11:36:32.179149   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/auto-997621/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [aa1b6221-ae28-4efa-9090-1570f61f3af2] Running
E0916 11:36:40.073300   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/calico-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:40.079752   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/calico-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:40.091189   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/calico-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:40.112597   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/calico-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:40.154042   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/calico-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:40.236120   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/calico-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:40.398187   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/calico-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:40.719924   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/calico-997621/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004698777s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-318774 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.33s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-318774 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0916 11:36:41.361717   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/calico-997621/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-318774 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-318774 --alsologtostderr -v=3
E0916 11:36:42.478498   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:42.643969   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/calico-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:43.144109   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kindnet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:45.205389   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/calico-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:50.327588   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/calico-997621/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-318774 --alsologtostderr -v=3: (12.665964835s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.67s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-318774 -n default-k8s-diff-port-318774
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-318774 -n default-k8s-diff-port-318774: exit status 7 (62.742336ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-318774 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (316.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-318774 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1
E0916 11:36:59.132878   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/custom-flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:59.139344   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/custom-flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:59.150713   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/custom-flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:59.172197   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/custom-flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:59.213955   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/custom-flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:59.295481   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/custom-flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:59.457495   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/custom-flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:59.779079   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/custom-flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:00.420861   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/custom-flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:00.569479   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/calico-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:01.703098   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/custom-flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:04.265035   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/custom-flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:09.387049   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/custom-flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:17.691389   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/false-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:17.697772   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/false-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:17.709117   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/false-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:17.731156   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/false-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:17.772568   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/false-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:17.853970   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/false-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:18.015596   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/false-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:18.337304   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/false-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:18.978866   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/false-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:19.628574   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/custom-flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:20.260523   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/false-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:21.051087   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/calico-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:22.667118   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:22.821756   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/false-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:27.943446   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/false-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:38.184932   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/false-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:40.110683   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/custom-flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:54.100902   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/auto-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:58.667000   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/false-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:02.012418   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/calico-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:05.065516   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kindnet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:12.809710   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/enable-default-cni-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:12.816088   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/enable-default-cni-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:12.827397   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/enable-default-cni-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:12.848739   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/enable-default-cni-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:12.890186   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/enable-default-cni-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:12.971654   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/enable-default-cni-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:13.133383   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/enable-default-cni-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:13.455542   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/enable-default-cni-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:14.097209   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/enable-default-cni-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:15.378958   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/enable-default-cni-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:17.940828   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/enable-default-cni-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:21.072027   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/custom-flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:23.062628   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/enable-default-cni-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:29.108256   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:29.114651   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:29.126048   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:29.147672   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:29.189069   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:29.270464   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:29.432038   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:29.753786   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:30.395856   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:31.677913   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:33.126499   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/gvisor-413772/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:33.303973   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/enable-default-cni-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:34.239643   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:39.361390   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:39.628543   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/false-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:49.603518   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:53.785362   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/enable-default-cni-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:00.829306   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/gvisor-413772/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:10.085330   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:23.934472   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/calico-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:32.150067   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/bridge-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:32.157302   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/bridge-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:32.168664   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/bridge-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:32.190049   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/bridge-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:32.231423   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/bridge-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:32.312841   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/bridge-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:32.474396   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/bridge-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:32.796038   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/bridge-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:33.437323   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/bridge-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:34.719222   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/bridge-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:34.747664   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/enable-default-cni-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:37.280797   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/bridge-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:42.402847   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/bridge-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:42.993759   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/custom-flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:48.380027   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kubenet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:48.386420   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kubenet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:48.397757   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kubenet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:48.419172   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kubenet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:48.460532   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kubenet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:48.541934   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kubenet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:48.703520   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kubenet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:49.025242   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kubenet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:49.667362   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kubenet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:50.949289   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kubenet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:51.046713   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:52.645061   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/bridge-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:53.511442   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kubenet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:58.633121   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kubenet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:40:01.550671   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/false-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:40:08.875097   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kubenet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:40:10.239129   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/auto-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:40:13.127324   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/bridge-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:40:21.204046   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kindnet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:40:29.356865   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kubenet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:40:37.942991   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/auto-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:40:48.907099   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kindnet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:40:54.088790   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/bridge-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:40:56.669109   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/enable-default-cni-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:41:08.126413   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/skaffold-413609/client.crt: no such file or directory" logger="UnhandledError"
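The "cert_rotation … no such file or directory" spam above appears to be the API client's certificate-rotation watcher still tracking `client.crt` paths for profiles (false-997621, flannel-997621, etc.) that were deleted earlier in the run. A minimal diagnostic sketch, not part of the test suite, for listing kubeconfig cert references that no longer exist on disk (the `list_stale_certs` helper and the kubeconfig path argument are assumptions for illustration):

```shell
# List client.crt paths referenced by a kubeconfig file that are missing on
# disk -- the condition that produces the cert_rotation errors above.
list_stale_certs() {
  # $1: path to a kubeconfig file
  grep -o '/[^": ]*client\.crt' "$1" 2>/dev/null | sort -u |
  while read -r crt; do
    # -e: path exists; print only the references that dangle
    [ -e "$crt" ] || printf 'stale: %s\n' "$crt"
  done
}
```

Usage: `list_stale_certs "$HOME/.kube/config"` (or the per-profile kubeconfig under the minikube integration directory) prints one `stale:` line per dangling reference.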
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-318774 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1: (5m16.505784276s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-318774 -n default-k8s-diff-port-318774
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (316.75s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ldnbb" [bbd576c6-8798-4630-8ae7-05d341bc83e5] Running
E0916 11:41:10.319098   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/kubenet-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:41:12.968059   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/flannel-997621/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003748051s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ldnbb" [bbd576c6-8798-4630-8ae7-05d341bc83e5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004450586s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-750749 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-750749 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/no-preload/serial/Pause (2.54s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-750749 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-750749 -n no-preload-750749
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-750749 -n no-preload-750749: exit status 2 (272.968073ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-750749 -n no-preload-750749
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-750749 -n no-preload-750749: exit status 2 (282.197292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-750749 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-750749 -n no-preload-750749
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-750749 -n no-preload-750749
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.54s)

TestStartStop/group/newest-cni/serial/FirstStart (61.61s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-805902 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-805902 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1: (1m1.605561829s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (61.61s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lpgbp" [f6adcd17-950b-4cf3-9fe5-e356d5ae4b7a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00386146s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lpgbp" [f6adcd17-950b-4cf3-9fe5-e356d5ae4b7a] Running
E0916 11:41:40.073858   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/calico-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:41:42.478921   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/functional-384697/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004250046s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-942513 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-942513 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/embed-certs/serial/Pause (2.3s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-942513 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-942513 -n embed-certs-942513
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-942513 -n embed-certs-942513: exit status 2 (228.27391ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-942513 -n embed-certs-942513
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-942513 -n embed-certs-942513: exit status 2 (227.817015ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-942513 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-942513 -n embed-certs-942513
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-942513 -n embed-certs-942513
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.30s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bktcv" [5f8221cf-8da6-457f-ae2d-98933cb911be] Running
E0916 11:42:16.010851   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/bridge-997621/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004645742s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bktcv" [5f8221cf-8da6-457f-ae2d-98933cb911be] Running
E0916 11:42:17.692182   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/false-997621/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004316784s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-318774 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0916 11:42:22.666950   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/addons-855148/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-318774 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-318774 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-318774 -n default-k8s-diff-port-318774
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-318774 -n default-k8s-diff-port-318774: exit status 2 (229.97669ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-318774 -n default-k8s-diff-port-318774
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-318774 -n default-k8s-diff-port-318774: exit status 2 (235.636897ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-318774 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-318774 -n default-k8s-diff-port-318774
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-318774 -n default-k8s-diff-port-318774
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.41s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-805902 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/newest-cni/serial/Stop (12.61s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-805902 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-805902 --alsologtostderr -v=3: (12.613807828s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.61s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-805902 -n newest-cni-805902
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-805902 -n newest-cni-805902: exit status 7 (63.619608ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-805902 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (35.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-805902 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1
E0916 11:42:45.392727   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/false-997621/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:43:12.809834   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/enable-default-cni-997621/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-805902 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1: (35.001219516s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-805902 -n newest-cni-805902
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.26s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-805902 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/newest-cni/serial/Pause (2.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-805902 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-805902 -n newest-cni-805902
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-805902 -n newest-cni-805902: exit status 2 (228.39373ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-805902 -n newest-cni-805902
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-805902 -n newest-cni-805902: exit status 2 (224.520797ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-805902 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-805902 -n newest-cni-805902
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-805902 -n newest-cni-805902
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.13s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9jqz8" [b2e9f513-6467-4fda-86a8-b549ad10f572] Running
E0916 11:43:40.511326   12041 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3871/.minikube/profiles/enable-default-cni-997621/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003918962s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9jqz8" [b2e9f513-6467-4fda-86a8-b549ad10f572] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003516693s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-857746 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-857746 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/old-k8s-version/serial/Pause (2.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-857746 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-857746 -n old-k8s-version-857746
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-857746 -n old-k8s-version-857746: exit status 2 (222.037058ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-857746 -n old-k8s-version-857746
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-857746 -n old-k8s-version-857746: exit status 2 (225.150473ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-857746 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-857746 -n old-k8s-version-857746
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-857746 -n old-k8s-version-857746
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.09s)

Test skip (31/341)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.97s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-997621 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-997621

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-997621

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-997621

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-997621

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-997621

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-997621

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-997621

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-997621

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-997621

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-997621

>>> host: /etc/nsswitch.conf:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: /etc/hosts:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: /etc/resolv.conf:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-997621

>>> host: crictl pods:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: crictl containers:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> k8s: describe netcat deployment:
error: context "cilium-997621" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-997621" does not exist

>>> k8s: netcat logs:
error: context "cilium-997621" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-997621" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-997621" does not exist

>>> k8s: coredns logs:
error: context "cilium-997621" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-997621" does not exist

>>> k8s: api server logs:
error: context "cilium-997621" does not exist

>>> host: /etc/cni:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: ip a s:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: ip r s:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: iptables-save:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: iptables table nat:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-997621

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-997621

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-997621" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-997621" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-997621

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-997621

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-997621" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-997621" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-997621" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-997621" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-997621" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: kubelet daemon config:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> k8s: kubelet logs:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-997621

>>> host: docker daemon status:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: docker daemon config:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: docker system info:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: cri-docker daemon status:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: cri-docker daemon config:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: cri-dockerd version:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: containerd daemon status:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: containerd daemon config:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: containerd config dump:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: crio daemon status:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: crio daemon config:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: /etc/crio:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

>>> host: crio config:
* Profile "cilium-997621" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997621"

----------------------- debugLogs end: cilium-997621 [took: 2.832675171s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-997621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-997621
--- SKIP: TestNetworkPlugins/group/cilium (2.97s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-094911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-094911
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)