Test Report: KVM_Linux 19531

cca1ca437c91fbc205ce13fbbdef95295053f0ce:2024-08-29:35997

Failed tests (1/341)

Order  Failed test                   Duration
33     TestAddons/parallel/Registry  72.73s
TestAddons/parallel/Registry (72.73s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.58692ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-776mp" [3de6ff17-cf9f-4375-8344-461862b48005] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005208789s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2d74h" [0872ecae-1d14-4b03-b5a7-3bed9bab8b7a] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004160856s
addons_test.go:342: (dbg) Run:  kubectl --context addons-661794 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-661794 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-661794 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.081437008s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-661794 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
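The failure above is an in-cluster DNS/connectivity timeout: `wget` from a busybox pod never reached `registry.kube-system.svc.cluster.local`, even though both the registry and registry-proxy pods were reported healthy. A minimal sketch of an equivalent diagnostic pod, assuming the registry addon's Service is named `registry` in `kube-system` (the pod name `registry-debug` and the split into an `nslookup` step are illustrative, not part of the test):

```yaml
# Hypothetical diagnostic pod replicating the failed registry check.
# Assumes the registry addon exposes a Service named "registry" in kube-system.
apiVersion: v1
kind: Pod
metadata:
  name: registry-debug
spec:
  restartPolicy: Never
  containers:
    - name: probe
      image: gcr.io/k8s-minikube/busybox
      command: ["sh", "-c"]
      args:
        - |
          # Check DNS resolution first, then the HTTP endpoint.
          # A timeout on the wget while nslookup succeeds points at the
          # Service/endpoints; a failing nslookup points at CoreDNS.
          nslookup registry.kube-system.svc.cluster.local
          wget --spider -S http://registry.kube-system.svc.cluster.local
```

Applied with `kubectl --context addons-661794 apply -f` and inspected via `kubectl logs registry-debug`, this would separate a DNS failure from a Service routing failure, which the single combined `wget` in the test cannot distinguish.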
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-661794 ip
2024/08/29 18:19:06 [DEBUG] GET http://192.168.39.206:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-661794 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-661794 -n addons-661794
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-661794 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-661794 logs -n 25: (1.072364479s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-398580                                                                     | download-only-398580 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-683228 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | binary-mirror-683228                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:36491                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-683228                                                                     | binary-mirror-683228 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| addons  | enable dashboard -p                                                                         | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | addons-661794                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | addons-661794                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-661794 --wait=true                                                                | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:09 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2  --addons=ingress                                                             |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-661794 addons disable                                                                | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:09 UTC | 29 Aug 24 18:09 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-661794 addons                                                                        | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-661794 ssh cat                                                                       | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	|         | /opt/local-path-provisioner/pvc-8f8191a6-b4a1-450c-abbd-925f066b3f23_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-661794 addons disable                                                                | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-661794 addons disable                                                                | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	|         | -p addons-661794                                                                            |                      |         |         |                     |                     |
	| addons  | addons-661794 addons                                                                        | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-661794 addons                                                                        | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-661794 addons disable                                                                | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	|         | -p addons-661794                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	|         | addons-661794                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	|         | addons-661794                                                                               |                      |         |         |                     |                     |
	| addons  | addons-661794 addons disable                                                                | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-661794 ssh curl -s                                                                   | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-661794 ip                                                                            | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	| addons  | addons-661794 addons disable                                                                | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-661794 addons disable                                                                | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC |                     |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| ip      | addons-661794 ip                                                                            | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	| addons  | addons-661794 addons disable                                                                | addons-661794        | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:05:33
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:05:33.567999   20885 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:05:33.568238   20885 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:33.568246   20885 out.go:358] Setting ErrFile to fd 2...
	I0829 18:05:33.568250   20885 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:33.568408   20885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13071/.minikube/bin
	I0829 18:05:33.568971   20885 out.go:352] Setting JSON to false
	I0829 18:05:33.569745   20885 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2882,"bootTime":1724951852,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:05:33.569800   20885 start.go:139] virtualization: kvm guest
	I0829 18:05:33.571603   20885 out.go:177] * [addons-661794] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:05:33.572798   20885 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:05:33.572834   20885 notify.go:220] Checking for updates...
	I0829 18:05:33.575130   20885 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:05:33.576113   20885 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13071/kubeconfig
	I0829 18:05:33.577123   20885 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13071/.minikube
	I0829 18:05:33.578206   20885 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:05:33.579304   20885 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:05:33.580355   20885 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:05:33.612483   20885 out.go:177] * Using the kvm2 driver based on user configuration
	I0829 18:05:33.613704   20885 start.go:297] selected driver: kvm2
	I0829 18:05:33.613724   20885 start.go:901] validating driver "kvm2" against <nil>
	I0829 18:05:33.613735   20885 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:05:33.614483   20885 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:05:33.614567   20885 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13071/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 18:05:33.629862   20885 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 18:05:33.629914   20885 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:05:33.630185   20885 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:05:33.630217   20885 cni.go:84] Creating CNI manager for ""
	I0829 18:05:33.630232   20885 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 18:05:33.630242   20885 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 18:05:33.630299   20885 start.go:340] cluster config:
	{Name:addons-661794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-661794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:05:33.630402   20885 iso.go:125] acquiring lock: {Name:mk111510bb887618e1358eefed89382b2a0d6da2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:05:33.632064   20885 out.go:177] * Starting "addons-661794" primary control-plane node in "addons-661794" cluster
	I0829 18:05:33.633306   20885 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 18:05:33.633341   20885 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0829 18:05:33.633348   20885 cache.go:56] Caching tarball of preloaded images
	I0829 18:05:33.633452   20885 preload.go:172] Found /home/jenkins/minikube-integration/19531-13071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0829 18:05:33.633466   20885 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 18:05:33.633758   20885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/config.json ...
	I0829 18:05:33.633780   20885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/config.json: {Name:mk0d835189a9aa47bf09d4b84fbae619ec8c1f32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:33.633920   20885 start.go:360] acquireMachinesLock for addons-661794: {Name:mkad599bd02fdc37681da3e09b3fc9783d7a4ad9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 18:05:33.633976   20885 start.go:364] duration metric: took 40.424µs to acquireMachinesLock for "addons-661794"
	I0829 18:05:33.634028   20885 start.go:93] Provisioning new machine with config: &{Name:addons-661794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:addons-661794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 18:05:33.634082   20885 start.go:125] createHost starting for "" (driver="kvm2")
	I0829 18:05:33.635438   20885 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0829 18:05:33.635561   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:05:33.635607   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:05:33.649910   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42225
	I0829 18:05:33.650366   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:05:33.650875   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:05:33.650897   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:05:33.651184   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:05:33.651402   20885 main.go:141] libmachine: (addons-661794) Calling .GetMachineName
	I0829 18:05:33.651525   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:05:33.651668   20885 start.go:159] libmachine.API.Create for "addons-661794" (driver="kvm2")
	I0829 18:05:33.651699   20885 client.go:168] LocalClient.Create starting
	I0829 18:05:33.651751   20885 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19531-13071/.minikube/certs/ca.pem
	I0829 18:05:33.882374   20885 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19531-13071/.minikube/certs/cert.pem
	I0829 18:05:34.011261   20885 main.go:141] libmachine: Running pre-create checks...
	I0829 18:05:34.011287   20885 main.go:141] libmachine: (addons-661794) Calling .PreCreateCheck
	I0829 18:05:34.011787   20885 main.go:141] libmachine: (addons-661794) Calling .GetConfigRaw
	I0829 18:05:34.012249   20885 main.go:141] libmachine: Creating machine...
	I0829 18:05:34.012266   20885 main.go:141] libmachine: (addons-661794) Calling .Create
	I0829 18:05:34.012397   20885 main.go:141] libmachine: (addons-661794) Creating KVM machine...
	I0829 18:05:34.013699   20885 main.go:141] libmachine: (addons-661794) DBG | found existing default KVM network
	I0829 18:05:34.014452   20885 main.go:141] libmachine: (addons-661794) DBG | I0829 18:05:34.014302   20907 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I0829 18:05:34.014477   20885 main.go:141] libmachine: (addons-661794) DBG | created network xml: 
	I0829 18:05:34.014490   20885 main.go:141] libmachine: (addons-661794) DBG | <network>
	I0829 18:05:34.014498   20885 main.go:141] libmachine: (addons-661794) DBG |   <name>mk-addons-661794</name>
	I0829 18:05:34.014507   20885 main.go:141] libmachine: (addons-661794) DBG |   <dns enable='no'/>
	I0829 18:05:34.014514   20885 main.go:141] libmachine: (addons-661794) DBG |   
	I0829 18:05:34.014524   20885 main.go:141] libmachine: (addons-661794) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0829 18:05:34.014533   20885 main.go:141] libmachine: (addons-661794) DBG |     <dhcp>
	I0829 18:05:34.014542   20885 main.go:141] libmachine: (addons-661794) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0829 18:05:34.014551   20885 main.go:141] libmachine: (addons-661794) DBG |     </dhcp>
	I0829 18:05:34.014556   20885 main.go:141] libmachine: (addons-661794) DBG |   </ip>
	I0829 18:05:34.014560   20885 main.go:141] libmachine: (addons-661794) DBG |   
	I0829 18:05:34.014565   20885 main.go:141] libmachine: (addons-661794) DBG | </network>
	I0829 18:05:34.014569   20885 main.go:141] libmachine: (addons-661794) DBG | 
	I0829 18:05:34.019751   20885 main.go:141] libmachine: (addons-661794) DBG | trying to create private KVM network mk-addons-661794 192.168.39.0/24...
	I0829 18:05:34.085079   20885 main.go:141] libmachine: (addons-661794) DBG | private KVM network mk-addons-661794 192.168.39.0/24 created
	I0829 18:05:34.085106   20885 main.go:141] libmachine: (addons-661794) DBG | I0829 18:05:34.085058   20907 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19531-13071/.minikube
	I0829 18:05:34.085124   20885 main.go:141] libmachine: (addons-661794) Setting up store path in /home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794 ...
	I0829 18:05:34.085140   20885 main.go:141] libmachine: (addons-661794) Building disk image from file:///home/jenkins/minikube-integration/19531-13071/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0829 18:05:34.085218   20885 main.go:141] libmachine: (addons-661794) Downloading /home/jenkins/minikube-integration/19531-13071/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19531-13071/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0829 18:05:34.337594   20885 main.go:141] libmachine: (addons-661794) DBG | I0829 18:05:34.337460   20907 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa...
	I0829 18:05:34.404494   20885 main.go:141] libmachine: (addons-661794) DBG | I0829 18:05:34.404373   20907 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/addons-661794.rawdisk...
	I0829 18:05:34.404524   20885 main.go:141] libmachine: (addons-661794) DBG | Writing magic tar header
	I0829 18:05:34.404537   20885 main.go:141] libmachine: (addons-661794) DBG | Writing SSH key tar header
	I0829 18:05:34.404548   20885 main.go:141] libmachine: (addons-661794) DBG | I0829 18:05:34.404488   20907 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794 ...
	I0829 18:05:34.404564   20885 main.go:141] libmachine: (addons-661794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794
	I0829 18:05:34.404617   20885 main.go:141] libmachine: (addons-661794) Setting executable bit set on /home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794 (perms=drwx------)
	I0829 18:05:34.404635   20885 main.go:141] libmachine: (addons-661794) Setting executable bit set on /home/jenkins/minikube-integration/19531-13071/.minikube/machines (perms=drwxr-xr-x)
	I0829 18:05:34.404645   20885 main.go:141] libmachine: (addons-661794) Setting executable bit set on /home/jenkins/minikube-integration/19531-13071/.minikube (perms=drwxr-xr-x)
	I0829 18:05:34.404654   20885 main.go:141] libmachine: (addons-661794) Setting executable bit set on /home/jenkins/minikube-integration/19531-13071 (perms=drwxrwxr-x)
	I0829 18:05:34.404666   20885 main.go:141] libmachine: (addons-661794) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 18:05:34.404676   20885 main.go:141] libmachine: (addons-661794) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 18:05:34.404691   20885 main.go:141] libmachine: (addons-661794) Creating domain...
	I0829 18:05:34.404715   20885 main.go:141] libmachine: (addons-661794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13071/.minikube/machines
	I0829 18:05:34.404733   20885 main.go:141] libmachine: (addons-661794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13071/.minikube
	I0829 18:05:34.404745   20885 main.go:141] libmachine: (addons-661794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13071
	I0829 18:05:34.404762   20885 main.go:141] libmachine: (addons-661794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 18:05:34.404775   20885 main.go:141] libmachine: (addons-661794) DBG | Checking permissions on dir: /home/jenkins
	I0829 18:05:34.404783   20885 main.go:141] libmachine: (addons-661794) DBG | Checking permissions on dir: /home
	I0829 18:05:34.404790   20885 main.go:141] libmachine: (addons-661794) DBG | Skipping /home - not owner
	I0829 18:05:34.405730   20885 main.go:141] libmachine: (addons-661794) define libvirt domain using xml: 
	I0829 18:05:34.405755   20885 main.go:141] libmachine: (addons-661794) <domain type='kvm'>
	I0829 18:05:34.405763   20885 main.go:141] libmachine: (addons-661794)   <name>addons-661794</name>
	I0829 18:05:34.405770   20885 main.go:141] libmachine: (addons-661794)   <memory unit='MiB'>4000</memory>
	I0829 18:05:34.405780   20885 main.go:141] libmachine: (addons-661794)   <vcpu>2</vcpu>
	I0829 18:05:34.405787   20885 main.go:141] libmachine: (addons-661794)   <features>
	I0829 18:05:34.405796   20885 main.go:141] libmachine: (addons-661794)     <acpi/>
	I0829 18:05:34.405937   20885 main.go:141] libmachine: (addons-661794)     <apic/>
	I0829 18:05:34.405969   20885 main.go:141] libmachine: (addons-661794)     <pae/>
	I0829 18:05:34.406016   20885 main.go:141] libmachine: (addons-661794)     
	I0829 18:05:34.406041   20885 main.go:141] libmachine: (addons-661794)   </features>
	I0829 18:05:34.406056   20885 main.go:141] libmachine: (addons-661794)   <cpu mode='host-passthrough'>
	I0829 18:05:34.406064   20885 main.go:141] libmachine: (addons-661794)   
	I0829 18:05:34.406072   20885 main.go:141] libmachine: (addons-661794)   </cpu>
	I0829 18:05:34.406079   20885 main.go:141] libmachine: (addons-661794)   <os>
	I0829 18:05:34.406085   20885 main.go:141] libmachine: (addons-661794)     <type>hvm</type>
	I0829 18:05:34.406092   20885 main.go:141] libmachine: (addons-661794)     <boot dev='cdrom'/>
	I0829 18:05:34.406098   20885 main.go:141] libmachine: (addons-661794)     <boot dev='hd'/>
	I0829 18:05:34.406102   20885 main.go:141] libmachine: (addons-661794)     <bootmenu enable='no'/>
	I0829 18:05:34.406127   20885 main.go:141] libmachine: (addons-661794)   </os>
	I0829 18:05:34.406142   20885 main.go:141] libmachine: (addons-661794)   <devices>
	I0829 18:05:34.406163   20885 main.go:141] libmachine: (addons-661794)     <disk type='file' device='cdrom'>
	I0829 18:05:34.406187   20885 main.go:141] libmachine: (addons-661794)       <source file='/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/boot2docker.iso'/>
	I0829 18:05:34.406204   20885 main.go:141] libmachine: (addons-661794)       <target dev='hdc' bus='scsi'/>
	I0829 18:05:34.406210   20885 main.go:141] libmachine: (addons-661794)       <readonly/>
	I0829 18:05:34.406217   20885 main.go:141] libmachine: (addons-661794)     </disk>
	I0829 18:05:34.406228   20885 main.go:141] libmachine: (addons-661794)     <disk type='file' device='disk'>
	I0829 18:05:34.406246   20885 main.go:141] libmachine: (addons-661794)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 18:05:34.406265   20885 main.go:141] libmachine: (addons-661794)       <source file='/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/addons-661794.rawdisk'/>
	I0829 18:05:34.406279   20885 main.go:141] libmachine: (addons-661794)       <target dev='hda' bus='virtio'/>
	I0829 18:05:34.406286   20885 main.go:141] libmachine: (addons-661794)     </disk>
	I0829 18:05:34.406295   20885 main.go:141] libmachine: (addons-661794)     <interface type='network'>
	I0829 18:05:34.406302   20885 main.go:141] libmachine: (addons-661794)       <source network='mk-addons-661794'/>
	I0829 18:05:34.406310   20885 main.go:141] libmachine: (addons-661794)       <model type='virtio'/>
	I0829 18:05:34.406321   20885 main.go:141] libmachine: (addons-661794)     </interface>
	I0829 18:05:34.406330   20885 main.go:141] libmachine: (addons-661794)     <interface type='network'>
	I0829 18:05:34.406342   20885 main.go:141] libmachine: (addons-661794)       <source network='default'/>
	I0829 18:05:34.406353   20885 main.go:141] libmachine: (addons-661794)       <model type='virtio'/>
	I0829 18:05:34.406361   20885 main.go:141] libmachine: (addons-661794)     </interface>
	I0829 18:05:34.406365   20885 main.go:141] libmachine: (addons-661794)     <serial type='pty'>
	I0829 18:05:34.406374   20885 main.go:141] libmachine: (addons-661794)       <target port='0'/>
	I0829 18:05:34.406378   20885 main.go:141] libmachine: (addons-661794)     </serial>
	I0829 18:05:34.406384   20885 main.go:141] libmachine: (addons-661794)     <console type='pty'>
	I0829 18:05:34.406389   20885 main.go:141] libmachine: (addons-661794)       <target type='serial' port='0'/>
	I0829 18:05:34.406394   20885 main.go:141] libmachine: (addons-661794)     </console>
	I0829 18:05:34.406400   20885 main.go:141] libmachine: (addons-661794)     <rng model='virtio'>
	I0829 18:05:34.406406   20885 main.go:141] libmachine: (addons-661794)       <backend model='random'>/dev/random</backend>
	I0829 18:05:34.406413   20885 main.go:141] libmachine: (addons-661794)     </rng>
	I0829 18:05:34.406422   20885 main.go:141] libmachine: (addons-661794)     
	I0829 18:05:34.406430   20885 main.go:141] libmachine: (addons-661794)     
	I0829 18:05:34.406435   20885 main.go:141] libmachine: (addons-661794)   </devices>
	I0829 18:05:34.406442   20885 main.go:141] libmachine: (addons-661794) </domain>
	I0829 18:05:34.406450   20885 main.go:141] libmachine: (addons-661794) 
	I0829 18:05:34.412046   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:da:32:d5 in network default
	I0829 18:05:34.412469   20885 main.go:141] libmachine: (addons-661794) Ensuring networks are active...
	I0829 18:05:34.412488   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:34.413203   20885 main.go:141] libmachine: (addons-661794) Ensuring network default is active
	I0829 18:05:34.413571   20885 main.go:141] libmachine: (addons-661794) Ensuring network mk-addons-661794 is active
	I0829 18:05:34.414173   20885 main.go:141] libmachine: (addons-661794) Getting domain xml...
	I0829 18:05:34.414921   20885 main.go:141] libmachine: (addons-661794) Creating domain...
	I0829 18:05:35.824214   20885 main.go:141] libmachine: (addons-661794) Waiting to get IP...
	I0829 18:05:35.824955   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:35.825271   20885 main.go:141] libmachine: (addons-661794) DBG | unable to find current IP address of domain addons-661794 in network mk-addons-661794
	I0829 18:05:35.825294   20885 main.go:141] libmachine: (addons-661794) DBG | I0829 18:05:35.825261   20907 retry.go:31] will retry after 308.736923ms: waiting for machine to come up
	I0829 18:05:36.136046   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:36.136515   20885 main.go:141] libmachine: (addons-661794) DBG | unable to find current IP address of domain addons-661794 in network mk-addons-661794
	I0829 18:05:36.136544   20885 main.go:141] libmachine: (addons-661794) DBG | I0829 18:05:36.136450   20907 retry.go:31] will retry after 288.544241ms: waiting for machine to come up
	I0829 18:05:36.426999   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:36.427405   20885 main.go:141] libmachine: (addons-661794) DBG | unable to find current IP address of domain addons-661794 in network mk-addons-661794
	I0829 18:05:36.427435   20885 main.go:141] libmachine: (addons-661794) DBG | I0829 18:05:36.427348   20907 retry.go:31] will retry after 302.269282ms: waiting for machine to come up
	I0829 18:05:36.731011   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:36.731438   20885 main.go:141] libmachine: (addons-661794) DBG | unable to find current IP address of domain addons-661794 in network mk-addons-661794
	I0829 18:05:36.731469   20885 main.go:141] libmachine: (addons-661794) DBG | I0829 18:05:36.731411   20907 retry.go:31] will retry after 495.0499ms: waiting for machine to come up
	I0829 18:05:37.228154   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:37.228579   20885 main.go:141] libmachine: (addons-661794) DBG | unable to find current IP address of domain addons-661794 in network mk-addons-661794
	I0829 18:05:37.228604   20885 main.go:141] libmachine: (addons-661794) DBG | I0829 18:05:37.228532   20907 retry.go:31] will retry after 643.75432ms: waiting for machine to come up
	I0829 18:05:37.874386   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:37.874832   20885 main.go:141] libmachine: (addons-661794) DBG | unable to find current IP address of domain addons-661794 in network mk-addons-661794
	I0829 18:05:37.874856   20885 main.go:141] libmachine: (addons-661794) DBG | I0829 18:05:37.874789   20907 retry.go:31] will retry after 783.599945ms: waiting for machine to come up
	I0829 18:05:38.660344   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:38.660730   20885 main.go:141] libmachine: (addons-661794) DBG | unable to find current IP address of domain addons-661794 in network mk-addons-661794
	I0829 18:05:38.660789   20885 main.go:141] libmachine: (addons-661794) DBG | I0829 18:05:38.660713   20907 retry.go:31] will retry after 722.899936ms: waiting for machine to come up
	I0829 18:05:39.385665   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:39.386066   20885 main.go:141] libmachine: (addons-661794) DBG | unable to find current IP address of domain addons-661794 in network mk-addons-661794
	I0829 18:05:39.386091   20885 main.go:141] libmachine: (addons-661794) DBG | I0829 18:05:39.386033   20907 retry.go:31] will retry after 1.1293382s: waiting for machine to come up
	I0829 18:05:40.517307   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:40.517677   20885 main.go:141] libmachine: (addons-661794) DBG | unable to find current IP address of domain addons-661794 in network mk-addons-661794
	I0829 18:05:40.517704   20885 main.go:141] libmachine: (addons-661794) DBG | I0829 18:05:40.517638   20907 retry.go:31] will retry after 1.358203958s: waiting for machine to come up
	I0829 18:05:41.878135   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:41.878609   20885 main.go:141] libmachine: (addons-661794) DBG | unable to find current IP address of domain addons-661794 in network mk-addons-661794
	I0829 18:05:41.878638   20885 main.go:141] libmachine: (addons-661794) DBG | I0829 18:05:41.878590   20907 retry.go:31] will retry after 1.624753148s: waiting for machine to come up
	I0829 18:05:43.505414   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:43.505827   20885 main.go:141] libmachine: (addons-661794) DBG | unable to find current IP address of domain addons-661794 in network mk-addons-661794
	I0829 18:05:43.505949   20885 main.go:141] libmachine: (addons-661794) DBG | I0829 18:05:43.505784   20907 retry.go:31] will retry after 2.501148381s: waiting for machine to come up
	I0829 18:05:46.010345   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:46.010805   20885 main.go:141] libmachine: (addons-661794) DBG | unable to find current IP address of domain addons-661794 in network mk-addons-661794
	I0829 18:05:46.010830   20885 main.go:141] libmachine: (addons-661794) DBG | I0829 18:05:46.010756   20907 retry.go:31] will retry after 2.68259862s: waiting for machine to come up
	I0829 18:05:48.695075   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:48.695426   20885 main.go:141] libmachine: (addons-661794) DBG | unable to find current IP address of domain addons-661794 in network mk-addons-661794
	I0829 18:05:48.695446   20885 main.go:141] libmachine: (addons-661794) DBG | I0829 18:05:48.695383   20907 retry.go:31] will retry after 4.053217734s: waiting for machine to come up
	I0829 18:05:52.749796   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:52.750189   20885 main.go:141] libmachine: (addons-661794) DBG | unable to find current IP address of domain addons-661794 in network mk-addons-661794
	I0829 18:05:52.750210   20885 main.go:141] libmachine: (addons-661794) DBG | I0829 18:05:52.750142   20907 retry.go:31] will retry after 3.76614265s: waiting for machine to come up
	I0829 18:05:56.518980   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:56.519411   20885 main.go:141] libmachine: (addons-661794) Found IP for machine: 192.168.39.206
	I0829 18:05:56.519436   20885 main.go:141] libmachine: (addons-661794) Reserving static IP address...
	I0829 18:05:56.519450   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has current primary IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:56.519854   20885 main.go:141] libmachine: (addons-661794) DBG | unable to find host DHCP lease matching {name: "addons-661794", mac: "52:54:00:d6:5e:4e", ip: "192.168.39.206"} in network mk-addons-661794
	I0829 18:05:56.678928   20885 main.go:141] libmachine: (addons-661794) DBG | Getting to WaitForSSH function...
	I0829 18:05:56.678961   20885 main.go:141] libmachine: (addons-661794) Reserved static IP address: 192.168.39.206
	I0829 18:05:56.679003   20885 main.go:141] libmachine: (addons-661794) Waiting for SSH to be available...
	I0829 18:05:56.681642   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:56.682062   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:05:56.682091   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:56.682314   20885 main.go:141] libmachine: (addons-661794) DBG | Using SSH client type: external
	I0829 18:05:56.682341   20885 main.go:141] libmachine: (addons-661794) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa (-rw-------)
	I0829 18:05:56.682371   20885 main.go:141] libmachine: (addons-661794) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 18:05:56.682384   20885 main.go:141] libmachine: (addons-661794) DBG | About to run SSH command:
	I0829 18:05:56.682396   20885 main.go:141] libmachine: (addons-661794) DBG | exit 0
	I0829 18:05:56.814171   20885 main.go:141] libmachine: (addons-661794) DBG | SSH cmd err, output: <nil>: 
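The external SSH probe logged above disables host-key checking (the VM's host key is new on every machine creation) and authenticates only with the per-machine `id_rsa`, then runs `exit 0` to confirm the daemon is up. A sketch of assembling that argument list for `exec.Command("ssh", args...)`; the exact option order and the key path below are illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// externalSSHArgs builds an argument list resembling the one in the log.
// Host keys are ignored on purpose: each freshly created VM has a new key,
// so known_hosts verification would always fail.
func externalSSHArgs(user, ip, keyPath string, port int) []string {
	return []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		fmt.Sprintf("%s@%s", user, ip),
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", fmt.Sprint(port),
	}
}

func main() {
	args := externalSSHArgs("docker", "192.168.39.206", "/tmp/id_rsa", 22)
	fmt.Println("ssh", strings.Join(args, " "))
}
```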
	I0829 18:05:56.814476   20885 main.go:141] libmachine: (addons-661794) KVM machine creation complete!
	I0829 18:05:56.814750   20885 main.go:141] libmachine: (addons-661794) Calling .GetConfigRaw
	I0829 18:05:56.858571   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:05:56.858936   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:05:56.859146   20885 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 18:05:56.859162   20885 main.go:141] libmachine: (addons-661794) Calling .GetState
	I0829 18:05:56.860581   20885 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 18:05:56.860594   20885 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 18:05:56.860600   20885 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 18:05:56.860606   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:05:56.862818   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:56.863121   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:05:56.863144   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:56.863256   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:05:56.863448   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:05:56.863610   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:05:56.863746   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:05:56.863992   20885 main.go:141] libmachine: Using SSH client type: native
	I0829 18:05:56.864172   20885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0829 18:05:56.864186   20885 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 18:05:56.969496   20885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:05:56.969523   20885 main.go:141] libmachine: Detecting the provisioner...
	I0829 18:05:56.969533   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:05:56.972432   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:56.972826   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:05:56.972858   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:56.973022   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:05:56.973235   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:05:56.973397   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:05:56.973514   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:05:56.973650   20885 main.go:141] libmachine: Using SSH client type: native
	I0829 18:05:56.973824   20885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0829 18:05:56.973837   20885 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 18:05:57.078495   20885 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 18:05:57.078577   20885 main.go:141] libmachine: found compatible host: buildroot
	I0829 18:05:57.078588   20885 main.go:141] libmachine: Provisioning with buildroot...
	I0829 18:05:57.078597   20885 main.go:141] libmachine: (addons-661794) Calling .GetMachineName
	I0829 18:05:57.078856   20885 buildroot.go:166] provisioning hostname "addons-661794"
	I0829 18:05:57.078887   20885 main.go:141] libmachine: (addons-661794) Calling .GetMachineName
	I0829 18:05:57.079062   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:05:57.081765   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:57.082130   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:05:57.082163   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:57.082357   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:05:57.082537   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:05:57.082694   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:05:57.082855   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:05:57.083019   20885 main.go:141] libmachine: Using SSH client type: native
	I0829 18:05:57.083195   20885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0829 18:05:57.083207   20885 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-661794 && echo "addons-661794" | sudo tee /etc/hostname
	I0829 18:05:57.200094   20885 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-661794
	
	I0829 18:05:57.200122   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:05:57.203055   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:57.203469   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:05:57.203519   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:57.203667   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:05:57.203838   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:05:57.203976   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:05:57.204156   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:05:57.204305   20885 main.go:141] libmachine: Using SSH client type: native
	I0829 18:05:57.204491   20885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0829 18:05:57.204506   20885 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-661794' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-661794/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-661794' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:05:57.318411   20885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:05:57.318452   20885 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13071/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13071/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13071/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13071/.minikube}
	I0829 18:05:57.318480   20885 buildroot.go:174] setting up certificates
	I0829 18:05:57.318491   20885 provision.go:84] configureAuth start
	I0829 18:05:57.318500   20885 main.go:141] libmachine: (addons-661794) Calling .GetMachineName
	I0829 18:05:57.318804   20885 main.go:141] libmachine: (addons-661794) Calling .GetIP
	I0829 18:05:57.321183   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:57.321729   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:05:57.321753   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:57.322006   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:05:57.326253   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:57.326669   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:05:57.326699   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:57.326906   20885 provision.go:143] copyHostCerts
	I0829 18:05:57.326971   20885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13071/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13071/.minikube/ca.pem (1082 bytes)
	I0829 18:05:57.327095   20885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13071/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13071/.minikube/cert.pem (1123 bytes)
	I0829 18:05:57.327157   20885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13071/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13071/.minikube/key.pem (1675 bytes)
	I0829 18:05:57.327217   20885 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13071/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13071/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13071/.minikube/certs/ca-key.pem org=jenkins.addons-661794 san=[127.0.0.1 192.168.39.206 addons-661794 localhost minikube]
	I0829 18:05:57.440704   20885 provision.go:177] copyRemoteCerts
	I0829 18:05:57.440757   20885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:05:57.440776   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:05:57.443310   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:57.443607   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:05:57.443663   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:57.443839   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:05:57.444020   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:05:57.444206   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:05:57.444347   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	I0829 18:05:57.527364   20885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13071/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 18:05:57.550098   20885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13071/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 18:05:57.572671   20885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13071/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 18:05:57.595566   20885 provision.go:87] duration metric: took 277.057047ms to configureAuth
	I0829 18:05:57.595596   20885 buildroot.go:189] setting minikube options for container-runtime
	I0829 18:05:57.595779   20885 config.go:182] Loaded profile config "addons-661794": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:05:57.595801   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:05:57.596075   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:05:57.598523   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:57.598870   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:05:57.598903   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:57.599052   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:05:57.599240   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:05:57.599383   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:05:57.599498   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:05:57.599618   20885 main.go:141] libmachine: Using SSH client type: native
	I0829 18:05:57.599774   20885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0829 18:05:57.599784   20885 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0829 18:05:57.703248   20885 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0829 18:05:57.703275   20885 buildroot.go:70] root file system type: tmpfs
	I0829 18:05:57.703412   20885 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0829 18:05:57.703432   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:05:57.706077   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:57.706442   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:05:57.706470   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:57.706697   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:05:57.706880   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:05:57.707021   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:05:57.707125   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:05:57.707254   20885 main.go:141] libmachine: Using SSH client type: native
	I0829 18:05:57.707432   20885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0829 18:05:57.707528   20885 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0829 18:05:57.822602   20885 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0829 18:05:57.822628   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:05:57.825660   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:57.826233   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:05:57.826259   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:57.826521   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:05:57.826697   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:05:57.826868   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:05:57.827141   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:05:57.827377   20885 main.go:141] libmachine: Using SSH client type: native
	I0829 18:05:57.827553   20885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0829 18:05:57.827569   20885 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0829 18:05:59.577561   20885 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0829 18:05:59.577585   20885 main.go:141] libmachine: Checking connection to Docker...
	I0829 18:05:59.577593   20885 main.go:141] libmachine: (addons-661794) Calling .GetURL
	I0829 18:05:59.578792   20885 main.go:141] libmachine: (addons-661794) DBG | Using libvirt version 6000000
	I0829 18:05:59.580638   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:59.580960   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:05:59.580983   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:59.581147   20885 main.go:141] libmachine: Docker is up and running!
	I0829 18:05:59.581160   20885 main.go:141] libmachine: Reticulating splines...
	I0829 18:05:59.581169   20885 client.go:171] duration metric: took 25.929459027s to LocalClient.Create
	I0829 18:05:59.581196   20885 start.go:167] duration metric: took 25.929526772s to libmachine.API.Create "addons-661794"
	I0829 18:05:59.581208   20885 start.go:293] postStartSetup for "addons-661794" (driver="kvm2")
	I0829 18:05:59.581222   20885 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:05:59.581244   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:05:59.581502   20885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:05:59.581527   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:05:59.583487   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:59.583861   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:05:59.583892   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:59.583985   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:05:59.584156   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:05:59.584290   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:05:59.584416   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	I0829 18:05:59.668664   20885 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:05:59.672947   20885 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 18:05:59.672977   20885 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13071/.minikube/addons for local assets ...
	I0829 18:05:59.673043   20885 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13071/.minikube/files for local assets ...
	I0829 18:05:59.673067   20885 start.go:296] duration metric: took 91.852232ms for postStartSetup
	I0829 18:05:59.673094   20885 main.go:141] libmachine: (addons-661794) Calling .GetConfigRaw
	I0829 18:05:59.673656   20885 main.go:141] libmachine: (addons-661794) Calling .GetIP
	I0829 18:05:59.676062   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:59.676399   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:05:59.676431   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:59.676601   20885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/config.json ...
	I0829 18:05:59.676768   20885 start.go:128] duration metric: took 26.042676568s to createHost
	I0829 18:05:59.676788   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:05:59.678802   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:59.679032   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:05:59.679058   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:59.679146   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:05:59.679334   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:05:59.679463   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:05:59.679582   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:05:59.679756   20885 main.go:141] libmachine: Using SSH client type: native
	I0829 18:05:59.679944   20885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0829 18:05:59.679960   20885 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 18:05:59.786214   20885 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724954759.766451721
	
	I0829 18:05:59.786238   20885 fix.go:216] guest clock: 1724954759.766451721
	I0829 18:05:59.786248   20885 fix.go:229] Guest: 2024-08-29 18:05:59.766451721 +0000 UTC Remote: 2024-08-29 18:05:59.676779 +0000 UTC m=+26.140750243 (delta=89.672721ms)
	I0829 18:05:59.786286   20885 fix.go:200] guest clock delta is within tolerance: 89.672721ms
	I0829 18:05:59.786293   20885 start.go:83] releasing machines lock for "addons-661794", held for 26.152285773s
	I0829 18:05:59.786315   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:05:59.786565   20885 main.go:141] libmachine: (addons-661794) Calling .GetIP
	I0829 18:05:59.789080   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:59.789442   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:05:59.789474   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:59.789595   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:05:59.790012   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:05:59.790169   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:05:59.790256   20885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:05:59.790305   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:05:59.790399   20885 ssh_runner.go:195] Run: cat /version.json
	I0829 18:05:59.790419   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:05:59.793123   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:59.793340   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:59.793593   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:05:59.793626   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:05:59.793649   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:59.793719   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:05:59.793840   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:05:59.794022   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:05:59.794059   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:05:59.794205   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:05:59.794205   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:05:59.794336   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:05:59.794397   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	I0829 18:05:59.794510   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	I0829 18:05:59.893855   20885 ssh_runner.go:195] Run: systemctl --version
	I0829 18:05:59.899496   20885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 18:05:59.904741   20885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 18:05:59.904798   20885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:05:59.919976   20885 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 18:05:59.920012   20885 start.go:495] detecting cgroup driver to use...
	I0829 18:05:59.920140   20885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:05:59.937332   20885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0829 18:05:59.947145   20885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0829 18:05:59.956916   20885 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0829 18:05:59.956977   20885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0829 18:05:59.966855   20885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0829 18:05:59.976695   20885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0829 18:05:59.986457   20885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0829 18:05:59.996159   20885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:06:00.006398   20885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0829 18:06:00.016607   20885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0829 18:06:00.026387   20885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0829 18:06:00.036290   20885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:06:00.045365   20885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 18:06:00.054464   20885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:00.167705   20885 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0829 18:06:00.191816   20885 start.go:495] detecting cgroup driver to use...
	I0829 18:06:00.191897   20885 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0829 18:06:00.205974   20885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 18:06:00.218429   20885 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 18:06:00.235788   20885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 18:06:00.248227   20885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0829 18:06:00.262863   20885 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0829 18:06:00.292108   20885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0829 18:06:00.304915   20885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:06:00.322201   20885 ssh_runner.go:195] Run: which cri-dockerd
	I0829 18:06:00.325694   20885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0829 18:06:00.334349   20885 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0829 18:06:00.350113   20885 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0829 18:06:00.459816   20885 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0829 18:06:00.584593   20885 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0829 18:06:00.584876   20885 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0829 18:06:00.601863   20885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:00.715445   20885 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0829 18:06:03.666360   20885 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.950874572s)
	I0829 18:06:03.666440   20885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0829 18:06:03.680331   20885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0829 18:06:03.693448   20885 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0829 18:06:03.809228   20885 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0829 18:06:03.932167   20885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:04.054817   20885 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0829 18:06:04.072002   20885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0829 18:06:04.085213   20885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:04.203699   20885 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0829 18:06:04.274054   20885 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0829 18:06:04.274141   20885 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0829 18:06:04.279704   20885 start.go:563] Will wait 60s for crictl version
	I0829 18:06:04.279756   20885 ssh_runner.go:195] Run: which crictl
	I0829 18:06:04.283433   20885 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:06:04.321273   20885 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0829 18:06:04.321343   20885 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0829 18:06:04.347075   20885 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0829 18:06:04.370582   20885 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0829 18:06:04.370622   20885 main.go:141] libmachine: (addons-661794) Calling .GetIP
	I0829 18:06:04.373197   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:04.373565   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:06:04.373595   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:04.373834   20885 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 18:06:04.377809   20885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:06:04.389771   20885 kubeadm.go:883] updating cluster {Name:addons-661794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:addons-661794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 18:06:04.389886   20885 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 18:06:04.389942   20885 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0829 18:06:04.404694   20885 docker.go:685] Got preloaded images: 
	I0829 18:06:04.404727   20885 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0829 18:06:04.404782   20885 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0829 18:06:04.414080   20885 ssh_runner.go:195] Run: which lz4
	I0829 18:06:04.417749   20885 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 18:06:04.422112   20885 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 18:06:04.422152   20885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0829 18:06:05.517115   20885 docker.go:649] duration metric: took 1.099401944s to copy over tarball
	I0829 18:06:05.517200   20885 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 18:06:07.413717   20885 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.896491862s)
	I0829 18:06:07.413743   20885 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 18:06:07.446823   20885 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0829 18:06:07.456898   20885 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0829 18:06:07.473171   20885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:07.578462   20885 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0829 18:06:11.697585   20885 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.119087997s)
	I0829 18:06:11.697699   20885 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0829 18:06:11.717685   20885 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0829 18:06:11.717709   20885 cache_images.go:84] Images are preloaded, skipping loading
	I0829 18:06:11.717729   20885 kubeadm.go:934] updating node { 192.168.39.206 8443 v1.31.0 docker true true} ...
	I0829 18:06:11.717834   20885 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-661794 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-661794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 18:06:11.717887   20885 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0829 18:06:11.769081   20885 cni.go:84] Creating CNI manager for ""
	I0829 18:06:11.769111   20885 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 18:06:11.769133   20885 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 18:06:11.769152   20885 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-661794 NodeName:addons-661794 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 18:06:11.769281   20885 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-661794"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 18:06:11.769338   20885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:06:11.778584   20885 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 18:06:11.778638   20885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 18:06:11.787425   20885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0829 18:06:11.803195   20885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:06:11.819156   20885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0829 18:06:11.834816   20885 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I0829 18:06:11.838738   20885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:06:11.850221   20885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:11.961888   20885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:06:11.980850   20885 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794 for IP: 192.168.39.206
	I0829 18:06:11.980887   20885 certs.go:194] generating shared ca certs ...
	I0829 18:06:11.980908   20885 certs.go:226] acquiring lock for ca certs: {Name:mk4c8f2802cc8dd241a72e42c126d7bccb015169 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:11.981088   20885 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13071/.minikube/ca.key
	I0829 18:06:12.153621   20885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13071/.minikube/ca.crt ...
	I0829 18:06:12.153653   20885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13071/.minikube/ca.crt: {Name:mke70ad6a5a4173abebb382a517ab9c478e44be2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:12.153815   20885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13071/.minikube/ca.key ...
	I0829 18:06:12.153824   20885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13071/.minikube/ca.key: {Name:mkd49877aa9c347d09272b3e5dac0b045b2e92cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:12.153895   20885 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13071/.minikube/proxy-client-ca.key
	I0829 18:06:12.294181   20885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13071/.minikube/proxy-client-ca.crt ...
	I0829 18:06:12.294212   20885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13071/.minikube/proxy-client-ca.crt: {Name:mk50394d6dd5de4c7e4edf9bb4bfe150d4d88574 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:12.294373   20885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13071/.minikube/proxy-client-ca.key ...
	I0829 18:06:12.294384   20885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13071/.minikube/proxy-client-ca.key: {Name:mkc7e56270498ae7d36317bb498cbcb8f5ec26e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:12.294451   20885 certs.go:256] generating profile certs ...
	I0829 18:06:12.294501   20885 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.key
	I0829 18:06:12.294514   20885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt with IP's: []
	I0829 18:06:12.618995   20885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt ...
	I0829 18:06:12.619029   20885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: {Name:mk4bf503a8926f962c0bc40bdf2f429c08fd53d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:12.619213   20885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.key ...
	I0829 18:06:12.619227   20885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.key: {Name:mk21936d7055460490c16d47e5668804cb378f9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:12.619319   20885 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/apiserver.key.10f24c83
	I0829 18:06:12.619342   20885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/apiserver.crt.10f24c83 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206]
	I0829 18:06:12.738303   20885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/apiserver.crt.10f24c83 ...
	I0829 18:06:12.738333   20885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/apiserver.crt.10f24c83: {Name:mkb4a17199d7db763b89614694b9c2484054ac31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:12.738514   20885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/apiserver.key.10f24c83 ...
	I0829 18:06:12.738531   20885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/apiserver.key.10f24c83: {Name:mk432d2f688f8069f4c8dc78a3ccb21b2b4bfe29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:12.738624   20885 certs.go:381] copying /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/apiserver.crt.10f24c83 -> /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/apiserver.crt
	I0829 18:06:12.738716   20885 certs.go:385] copying /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/apiserver.key.10f24c83 -> /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/apiserver.key
	I0829 18:06:12.738786   20885 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/proxy-client.key
	I0829 18:06:12.738810   20885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/proxy-client.crt with IP's: []
	I0829 18:06:12.829853   20885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/proxy-client.crt ...
	I0829 18:06:12.829885   20885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/proxy-client.crt: {Name:mk93d857549146ad80448a794c73eeaec8859af5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:12.830080   20885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/proxy-client.key ...
	I0829 18:06:12.830096   20885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/proxy-client.key: {Name:mk44eeb5edd7395ef3a39546c86db7df00406294 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:12.830279   20885 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13071/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 18:06:12.830321   20885 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13071/.minikube/certs/ca.pem (1082 bytes)
	I0829 18:06:12.830356   20885 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13071/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:06:12.830394   20885 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13071/.minikube/certs/key.pem (1675 bytes)
	I0829 18:06:12.830994   20885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13071/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:06:12.854604   20885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13071/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 18:06:12.876925   20885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13071/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:06:12.899126   20885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13071/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 18:06:12.922577   20885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0829 18:06:12.945622   20885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 18:06:12.969015   20885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:06:12.992485   20885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 18:06:13.015879   20885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13071/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:06:13.037988   20885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 18:06:13.053833   20885 ssh_runner.go:195] Run: openssl version
	I0829 18:06:13.059458   20885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:06:13.069745   20885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:13.074247   20885 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:13.074306   20885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:13.079877   20885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 18:06:13.090571   20885 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:06:13.094658   20885 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:06:13.094717   20885 kubeadm.go:392] StartCluster: {Name:addons-661794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 C
lusterName:addons-661794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:06:13.094851   20885 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0829 18:06:13.111113   20885 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 18:06:13.120677   20885 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 18:06:13.129864   20885 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 18:06:13.139460   20885 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 18:06:13.139477   20885 kubeadm.go:157] found existing configuration files:
	
	I0829 18:06:13.139519   20885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 18:06:13.148536   20885 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 18:06:13.148591   20885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 18:06:13.158157   20885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 18:06:13.167560   20885 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 18:06:13.167622   20885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 18:06:13.176914   20885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 18:06:13.185943   20885 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 18:06:13.186029   20885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 18:06:13.195671   20885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 18:06:13.204533   20885 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 18:06:13.204604   20885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 18:06:13.214009   20885 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 18:06:13.261579   20885 kubeadm.go:310] W0829 18:06:13.244643    1500 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:06:13.262433   20885 kubeadm.go:310] W0829 18:06:13.245709    1500 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:06:13.360081   20885 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 18:06:22.969920   20885 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 18:06:22.970024   20885 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 18:06:22.970160   20885 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 18:06:22.970307   20885 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 18:06:22.970431   20885 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 18:06:22.970516   20885 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 18:06:22.972203   20885 out.go:235]   - Generating certificates and keys ...
	I0829 18:06:22.972290   20885 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 18:06:22.972354   20885 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 18:06:22.972448   20885 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 18:06:22.972529   20885 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 18:06:22.972589   20885 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 18:06:22.972639   20885 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 18:06:22.972693   20885 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 18:06:22.972852   20885 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-661794 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I0829 18:06:22.972923   20885 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 18:06:22.973114   20885 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-661794 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I0829 18:06:22.973209   20885 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 18:06:22.973287   20885 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 18:06:22.973344   20885 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 18:06:22.973398   20885 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 18:06:22.973473   20885 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 18:06:22.973536   20885 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 18:06:22.973581   20885 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 18:06:22.973653   20885 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 18:06:22.973715   20885 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 18:06:22.973786   20885 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 18:06:22.973844   20885 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 18:06:22.975564   20885 out.go:235]   - Booting up control plane ...
	I0829 18:06:22.975653   20885 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 18:06:22.975760   20885 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 18:06:22.975847   20885 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 18:06:22.975974   20885 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 18:06:22.976092   20885 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 18:06:22.976166   20885 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 18:06:22.976310   20885 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 18:06:22.976437   20885 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 18:06:22.976523   20885 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001084664s
	I0829 18:06:22.976615   20885 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 18:06:22.976700   20885 kubeadm.go:310] [api-check] The API server is healthy after 5.002413462s
	I0829 18:06:22.976852   20885 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 18:06:22.977016   20885 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 18:06:22.977096   20885 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 18:06:22.977343   20885 kubeadm.go:310] [mark-control-plane] Marking the node addons-661794 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 18:06:22.977431   20885 kubeadm.go:310] [bootstrap-token] Using token: pd4g5c.r7npa6ssa1qfdjav
	I0829 18:06:22.979051   20885 out.go:235]   - Configuring RBAC rules ...
	I0829 18:06:22.979192   20885 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 18:06:22.979300   20885 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 18:06:22.979496   20885 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 18:06:22.979651   20885 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 18:06:22.979783   20885 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 18:06:22.979863   20885 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 18:06:22.979956   20885 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 18:06:22.980011   20885 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 18:06:22.980056   20885 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 18:06:22.980064   20885 kubeadm.go:310] 
	I0829 18:06:22.980116   20885 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 18:06:22.980121   20885 kubeadm.go:310] 
	I0829 18:06:22.980213   20885 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 18:06:22.980220   20885 kubeadm.go:310] 
	I0829 18:06:22.980243   20885 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 18:06:22.980294   20885 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 18:06:22.980337   20885 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 18:06:22.980343   20885 kubeadm.go:310] 
	I0829 18:06:22.980400   20885 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 18:06:22.980406   20885 kubeadm.go:310] 
	I0829 18:06:22.980444   20885 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 18:06:22.980449   20885 kubeadm.go:310] 
	I0829 18:06:22.980496   20885 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 18:06:22.980578   20885 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 18:06:22.980641   20885 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 18:06:22.980647   20885 kubeadm.go:310] 
	I0829 18:06:22.980716   20885 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 18:06:22.980821   20885 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 18:06:22.980834   20885 kubeadm.go:310] 
	I0829 18:06:22.980935   20885 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pd4g5c.r7npa6ssa1qfdjav \
	I0829 18:06:22.981058   20885 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e5098688a5f0d4992fc17bef8174f5834791bef2486200f516bb6e907554c943 \
	I0829 18:06:22.981087   20885 kubeadm.go:310] 	--control-plane 
	I0829 18:06:22.981097   20885 kubeadm.go:310] 
	I0829 18:06:22.981166   20885 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 18:06:22.981174   20885 kubeadm.go:310] 
	I0829 18:06:22.981245   20885 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pd4g5c.r7npa6ssa1qfdjav \
	I0829 18:06:22.981347   20885 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e5098688a5f0d4992fc17bef8174f5834791bef2486200f516bb6e907554c943 
	I0829 18:06:22.981356   20885 cni.go:84] Creating CNI manager for ""
	I0829 18:06:22.981368   20885 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 18:06:22.982808   20885 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 18:06:22.983976   20885 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 18:06:22.994205   20885 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 18:06:23.011011   20885 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 18:06:23.011080   20885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:23.011088   20885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-661794 minikube.k8s.io/updated_at=2024_08_29T18_06_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=addons-661794 minikube.k8s.io/primary=true
	I0829 18:06:23.145170   20885 ops.go:34] apiserver oom_adj: -16
	I0829 18:06:23.145355   20885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:23.646224   20885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:24.146267   20885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:24.645428   20885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:25.146132   20885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:25.645352   20885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:26.146184   20885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:26.646079   20885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:27.146409   20885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:27.235625   20885 kubeadm.go:1113] duration metric: took 4.22460551s to wait for elevateKubeSystemPrivileges
	I0829 18:06:27.235664   20885 kubeadm.go:394] duration metric: took 14.140951426s to StartCluster
	I0829 18:06:27.235687   20885 settings.go:142] acquiring lock: {Name:mkb45f41da43466ff331a3e1c94089fb513d8d8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:27.235814   20885 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13071/kubeconfig
	I0829 18:06:27.236170   20885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13071/kubeconfig: {Name:mk7321669670c286c19764a9599421bfdb1d70fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:27.236357   20885 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0829 18:06:27.236383   20885 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 18:06:27.236452   20885 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0829 18:06:27.236548   20885 addons.go:69] Setting yakd=true in profile "addons-661794"
	I0829 18:06:27.236558   20885 addons.go:69] Setting helm-tiller=true in profile "addons-661794"
	I0829 18:06:27.236578   20885 config.go:182] Loaded profile config "addons-661794": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:06:27.236579   20885 addons.go:69] Setting default-storageclass=true in profile "addons-661794"
	I0829 18:06:27.236593   20885 addons.go:234] Setting addon helm-tiller=true in "addons-661794"
	I0829 18:06:27.236602   20885 addons.go:69] Setting metrics-server=true in profile "addons-661794"
	I0829 18:06:27.236584   20885 addons.go:234] Setting addon yakd=true in "addons-661794"
	I0829 18:06:27.236627   20885 addons.go:69] Setting registry=true in profile "addons-661794"
	I0829 18:06:27.236627   20885 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-661794"
	I0829 18:06:27.236636   20885 host.go:66] Checking if "addons-661794" exists ...
	I0829 18:06:27.236643   20885 host.go:66] Checking if "addons-661794" exists ...
	I0829 18:06:27.236648   20885 addons.go:69] Setting storage-provisioner=true in profile "addons-661794"
	I0829 18:06:27.236685   20885 addons.go:234] Setting addon storage-provisioner=true in "addons-661794"
	I0829 18:06:27.236718   20885 host.go:66] Checking if "addons-661794" exists ...
	I0829 18:06:27.236594   20885 addons.go:69] Setting ingress-dns=true in profile "addons-661794"
	I0829 18:06:27.236775   20885 addons.go:234] Setting addon ingress-dns=true in "addons-661794"
	I0829 18:06:27.236804   20885 host.go:66] Checking if "addons-661794" exists ...
	I0829 18:06:27.236852   20885 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-661794"
	I0829 18:06:27.236905   20885 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-661794"
	I0829 18:06:27.236942   20885 host.go:66] Checking if "addons-661794" exists ...
	I0829 18:06:27.237008   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.237027   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.237094   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.237117   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.237149   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.237186   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.237310   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.236598   20885 addons.go:69] Setting inspektor-gadget=true in profile "addons-661794"
	I0829 18:06:27.237347   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.237363   20885 addons.go:234] Setting addon inspektor-gadget=true in "addons-661794"
	I0829 18:06:27.236643   20885 addons.go:234] Setting addon registry=true in "addons-661794"
	I0829 18:06:27.237406   20885 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-661794"
	I0829 18:06:27.237411   20885 addons.go:69] Setting gcp-auth=true in profile "addons-661794"
	I0829 18:06:27.237417   20885 host.go:66] Checking if "addons-661794" exists ...
	I0829 18:06:27.237429   20885 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-661794"
	I0829 18:06:27.237432   20885 mustload.go:65] Loading cluster: addons-661794
	I0829 18:06:27.237589   20885 addons.go:69] Setting volcano=true in profile "addons-661794"
	I0829 18:06:27.237602   20885 config.go:182] Loaded profile config "addons-661794": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:06:27.237628   20885 addons.go:234] Setting addon volcano=true in "addons-661794"
	I0829 18:06:27.237657   20885 host.go:66] Checking if "addons-661794" exists ...
	I0829 18:06:27.237732   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.237753   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.237774   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.236590   20885 addons.go:69] Setting ingress=true in profile "addons-661794"
	I0829 18:06:27.237800   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.237809   20885 addons.go:69] Setting volumesnapshots=true in profile "addons-661794"
	I0829 18:06:27.237813   20885 addons.go:234] Setting addon ingress=true in "addons-661794"
	I0829 18:06:27.237829   20885 addons.go:234] Setting addon volumesnapshots=true in "addons-661794"
	I0829 18:06:27.237803   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.236623   20885 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-661794"
	I0829 18:06:27.237860   20885 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-661794"
	I0829 18:06:27.237830   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.237892   20885 host.go:66] Checking if "addons-661794" exists ...
	I0829 18:06:27.237944   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.237967   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.236619   20885 addons.go:234] Setting addon metrics-server=true in "addons-661794"
	I0829 18:06:27.238041   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.238061   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.238179   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.238204   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.238264   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.238290   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.238324   20885 host.go:66] Checking if "addons-661794" exists ...
	I0829 18:06:27.238432   20885 host.go:66] Checking if "addons-661794" exists ...
	I0829 18:06:27.238527   20885 host.go:66] Checking if "addons-661794" exists ...
	I0829 18:06:27.238771   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.238807   20885 out.go:177] * Verifying Kubernetes components...
	I0829 18:06:27.238869   20885 host.go:66] Checking if "addons-661794" exists ...
	I0829 18:06:27.238970   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.239000   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.238841   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.238847   20885 addons.go:69] Setting cloud-spanner=true in profile "addons-661794"
	I0829 18:06:27.239261   20885 addons.go:234] Setting addon cloud-spanner=true in "addons-661794"
	I0829 18:06:27.238793   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.239289   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.239597   20885 host.go:66] Checking if "addons-661794" exists ...
	I0829 18:06:27.240301   20885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:27.257509   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38897
	I0829 18:06:27.257708   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36551
	I0829 18:06:27.258059   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.258201   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.258695   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.258717   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.258784   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33003
	I0829 18:06:27.258797   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.258809   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.259059   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.259099   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.259112   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.259636   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.259677   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.259847   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.259870   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.259944   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34251
	I0829 18:06:27.260066   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.260066   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35153
	I0829 18:06:27.260084   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.260194   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.260248   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.260452   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.260727   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.260750   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.261175   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.261343   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.261359   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.261420   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.261459   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.262081   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.262251   20885 main.go:141] libmachine: (addons-661794) Calling .GetState
	I0829 18:06:27.268253   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40371
	I0829 18:06:27.274339   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.274388   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.275298   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.275337   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.275823   20885 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-661794"
	I0829 18:06:27.275866   20885 host.go:66] Checking if "addons-661794" exists ...
	I0829 18:06:27.276144   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.276168   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.280714   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.280767   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.281353   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.282024   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.282052   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.283023   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.283242   20885 main.go:141] libmachine: (addons-661794) Calling .GetState
	I0829 18:06:27.286670   20885 addons.go:234] Setting addon default-storageclass=true in "addons-661794"
	I0829 18:06:27.286714   20885 host.go:66] Checking if "addons-661794" exists ...
	I0829 18:06:27.287077   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.287113   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.297126   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33775
	I0829 18:06:27.297750   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.299950   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45599
	I0829 18:06:27.300614   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.300636   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.301029   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.301401   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.302084   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.302111   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.302759   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.302777   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.303163   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.303365   20885 main.go:141] libmachine: (addons-661794) Calling .GetState
	I0829 18:06:27.304318   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40849
	I0829 18:06:27.304706   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.305213   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.305231   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.305613   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:06:27.305658   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.305861   20885 main.go:141] libmachine: (addons-661794) Calling .GetState
	I0829 18:06:27.307641   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:06:27.308587   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39807
	I0829 18:06:27.309085   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.309627   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.309653   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.309772   20885 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0829 18:06:27.309825   20885 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 18:06:27.310143   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.310837   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.310883   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.311334   20885 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:27.311355   20885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 18:06:27.311373   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:06:27.312172   20885 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0829 18:06:27.313246   20885 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0829 18:06:27.314288   20885 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0829 18:06:27.315233   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.315853   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:06:27.315874   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.316080   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:06:27.316205   20885 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0829 18:06:27.316266   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:06:27.316407   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:06:27.316510   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	I0829 18:06:27.318384   20885 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0829 18:06:27.319412   20885 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0829 18:06:27.319779   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33447
	I0829 18:06:27.320337   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.320943   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.320963   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.321416   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.321628   20885 main.go:141] libmachine: (addons-661794) Calling .GetState
	I0829 18:06:27.321643   20885 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0829 18:06:27.322751   20885 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0829 18:06:27.322776   20885 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0829 18:06:27.322792   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:06:27.324317   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34339
	I0829 18:06:27.324816   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.325335   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.325358   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.325421   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39377
	I0829 18:06:27.325532   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:06:27.326068   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.326522   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.326542   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.326825   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.327015   20885 main.go:141] libmachine: (addons-661794) Calling .GetState
	I0829 18:06:27.327553   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.327713   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.327760   20885 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0829 18:06:27.327804   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:06:27.327820   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.328113   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:06:27.328277   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:06:27.328398   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:06:27.328517   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	I0829 18:06:27.328813   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34461
	I0829 18:06:27.329484   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.329523   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.329778   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:06:27.329812   20885 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:06:27.329835   20885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0829 18:06:27.329853   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:06:27.330111   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.330678   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.330697   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.331285   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.331700   20885 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0829 18:06:27.332495   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.332780   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.333280   20885 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0829 18:06:27.333308   20885 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0829 18:06:27.333332   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:06:27.333946   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.334692   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:06:27.334726   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.334949   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:06:27.334989   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34705
	I0829 18:06:27.335268   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:06:27.335677   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:06:27.335753   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.336005   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	I0829 18:06:27.336389   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.336403   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.336835   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.337015   20885 main.go:141] libmachine: (addons-661794) Calling .GetState
	I0829 18:06:27.337783   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.338289   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:06:27.338315   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.338485   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:06:27.338644   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:06:27.338967   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:06:27.339013   20885 host.go:66] Checking if "addons-661794" exists ...
	I0829 18:06:27.339153   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	I0829 18:06:27.339405   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.339448   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.341371   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36117
	I0829 18:06:27.341715   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46453
	I0829 18:06:27.342206   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.342657   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.342687   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.343017   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.343569   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.343605   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.343897   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46883
	I0829 18:06:27.344198   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.344656   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.344671   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.344958   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.345037   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.345580   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.345603   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.346026   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.346042   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.346539   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.347123   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.347156   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.351909   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34473
	I0829 18:06:27.352396   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.352975   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.352991   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.353556   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.354283   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.354337   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.360402   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46659
	I0829 18:06:27.360858   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.361277   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41201
	I0829 18:06:27.361387   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.361401   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.361745   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.361803   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.362045   20885 main.go:141] libmachine: (addons-661794) Calling .GetState
	I0829 18:06:27.362304   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34043
	I0829 18:06:27.362651   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.362675   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.362737   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.363320   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.363349   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.363491   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.363786   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.364058   20885 main.go:141] libmachine: (addons-661794) Calling .GetState
	I0829 18:06:27.364125   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:06:27.365319   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37411
	I0829 18:06:27.365765   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.365947   20885 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0829 18:06:27.366113   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:06:27.366925   20885 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0829 18:06:27.366951   20885 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0829 18:06:27.366972   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:06:27.367286   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.367303   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.367704   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.368150   20885 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0829 18:06:27.368412   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.368446   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.368616   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39799
	I0829 18:06:27.369202   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.369756   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.369794   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.369759   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.369859   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.370654   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.370718   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.371338   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:06:27.371363   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.371377   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:06:27.371494   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:06:27.371687   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:06:27.371729   20885 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0829 18:06:27.371849   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:06:27.371995   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	I0829 18:06:27.375062   20885 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0829 18:06:27.377534   20885 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0829 18:06:27.377561   20885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0829 18:06:27.377584   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:06:27.381490   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.381971   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:06:27.382001   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.382158   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:06:27.382367   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:06:27.382487   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:06:27.382634   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	I0829 18:06:27.384761   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43581
	I0829 18:06:27.385285   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.385872   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.385890   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.386384   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.386582   20885 main.go:141] libmachine: (addons-661794) Calling .GetState
	I0829 18:06:27.386663   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42317
	I0829 18:06:27.388830   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:06:27.388892   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35985
	I0829 18:06:27.389315   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.390295   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.390608   20885 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0829 18:06:27.390608   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.390686   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.391064   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.391075   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.391114   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.391375   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.391839   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:27.391873   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:27.391877   20885 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0829 18:06:27.391888   20885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0829 18:06:27.391905   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:06:27.391978   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39191
	I0829 18:06:27.392153   20885 main.go:141] libmachine: (addons-661794) Calling .GetState
	I0829 18:06:27.392567   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.392827   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45429
	I0829 18:06:27.393059   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.393077   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.395007   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.395547   20885 main.go:141] libmachine: (addons-661794) Calling .GetState
	I0829 18:06:27.395716   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:06:27.395941   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.396338   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:06:27.396362   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.396556   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.396634   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:06:27.396810   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:06:27.396969   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:06:27.397120   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	I0829 18:06:27.397565   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:06:27.397757   20885 out.go:177]   - Using image docker.io/registry:2.8.3
	I0829 18:06:27.397810   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.397824   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.398529   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46309
	I0829 18:06:27.398770   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0829 18:06:27.399026   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.399090   20885 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0829 18:06:27.399378   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.399559   20885 main.go:141] libmachine: (addons-661794) Calling .GetState
	I0829 18:06:27.400216   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.400234   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.400295   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.400665   20885 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0829 18:06:27.400981   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35315
	I0829 18:06:27.401049   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.401062   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.401083   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.401151   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:06:27.401347   20885 main.go:141] libmachine: (addons-661794) Calling .GetState
	I0829 18:06:27.401449   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.401626   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.401754   20885 main.go:141] libmachine: (addons-661794) Calling .GetState
	I0829 18:06:27.401753   20885 out.go:177]   - Using image docker.io/busybox:stable
	I0829 18:06:27.401927   20885 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0829 18:06:27.401950   20885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0829 18:06:27.401968   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:06:27.402078   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42539
	I0829 18:06:27.402312   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.402325   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.402782   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.402905   20885 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0829 18:06:27.403189   20885 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:27.403207   20885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0829 18:06:27.403225   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:06:27.403480   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.403497   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:27.403550   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.403967   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.404020   20885 main.go:141] libmachine: (addons-661794) Calling .GetState
	I0829 18:06:27.404067   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:06:27.404237   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:06:27.404417   20885 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:27.404434   20885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0829 18:06:27.404455   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:06:27.404538   20885 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:27.404554   20885 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 18:06:27.404568   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:06:27.405335   20885 main.go:141] libmachine: (addons-661794) Calling .GetState
	I0829 18:06:27.406068   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.406611   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:06:27.406631   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.406956   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:06:27.407160   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:06:27.407417   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:06:27.407547   20885 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0829 18:06:27.407692   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	I0829 18:06:27.407980   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:06:27.408047   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:06:27.409099   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.409198   20885 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0829 18:06:27.409273   20885 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:27.409287   20885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0829 18:06:27.409303   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:06:27.409312   20885 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0829 18:06:27.409439   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:06:27.409462   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.409593   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:06:27.409783   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:06:27.409839   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.410104   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:06:27.410277   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	I0829 18:06:27.410570   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.410611   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:06:27.410627   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.410737   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:06:27.410811   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.410770   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:06:27.410878   20885 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0829 18:06:27.410892   20885 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0829 18:06:27.410908   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:06:27.410963   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:06:27.411005   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:06:27.411113   20885 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 18:06:27.411126   20885 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 18:06:27.411140   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:06:27.411175   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:06:27.411240   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:06:27.411676   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	I0829 18:06:27.411772   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:06:27.412164   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	I0829 18:06:27.413007   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.413522   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:06:27.413554   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.413717   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:06:27.413885   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:06:27.414045   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:06:27.414160   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	I0829 18:06:27.414667   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.415094   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:06:27.415133   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.415159   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.415221   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:06:27.415393   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:06:27.415600   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:06:27.415659   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:06:27.415677   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.415765   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	I0829 18:06:27.415811   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:06:27.415934   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:06:27.416044   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:06:27.416150   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	I0829 18:06:27.425711   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40307
	I0829 18:06:27.430829   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:27.431439   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:27.431462   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	W0829 18:06:27.431608   20885 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0829 18:06:27.431634   20885 retry.go:31] will retry after 178.391817ms: ssh: handshake failed: EOF
	I0829 18:06:27.431830   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:27.432042   20885 main.go:141] libmachine: (addons-661794) Calling .GetState
	I0829 18:06:27.433574   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:06:27.435409   20885 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0829 18:06:27.436993   20885 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:27.438361   20885 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:27.440313   20885 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:06:27.440341   20885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0829 18:06:27.440365   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:06:27.443581   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.443935   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:06:27.443959   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:27.444054   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:06:27.444245   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:06:27.444395   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:06:27.444508   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	W0829 18:06:27.612696   20885 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:46154->192.168.39.206:22: read: connection reset by peer
	I0829 18:06:27.612731   20885 retry.go:31] will retry after 385.79808ms: ssh: handshake failed: read tcp 192.168.39.1:46154->192.168.39.206:22: read: connection reset by peer
	I0829 18:06:27.816836   20885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:06:27.816898   20885 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0829 18:06:27.846538   20885 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0829 18:06:27.846560   20885 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0829 18:06:27.850626   20885 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0829 18:06:27.850645   20885 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0829 18:06:27.877788   20885 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0829 18:06:27.877812   20885 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0829 18:06:27.909241   20885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:27.925442   20885 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0829 18:06:27.925465   20885 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0829 18:06:27.997412   20885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:28.011405   20885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:28.022650   20885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:06:28.042237   20885 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 18:06:28.042276   20885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0829 18:06:28.046687   20885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0829 18:06:28.048361   20885 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0829 18:06:28.048379   20885 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0829 18:06:28.062360   20885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:06:28.067658   20885 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0829 18:06:28.067676   20885 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0829 18:06:28.084359   20885 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0829 18:06:28.084378   20885 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0829 18:06:28.089292   20885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:28.106415   20885 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:06:28.106435   20885 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0829 18:06:28.113527   20885 node_ready.go:35] waiting up to 6m0s for node "addons-661794" to be "Ready" ...
	I0829 18:06:28.116762   20885 node_ready.go:49] node "addons-661794" has status "Ready":"True"
	I0829 18:06:28.116785   20885 node_ready.go:38] duration metric: took 3.223726ms for node "addons-661794" to be "Ready" ...
	I0829 18:06:28.116793   20885 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:06:28.124663   20885 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hr6j2" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:28.146358   20885 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0829 18:06:28.146407   20885 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0829 18:06:28.154418   20885 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0829 18:06:28.154445   20885 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0829 18:06:28.221809   20885 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 18:06:28.221838   20885 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 18:06:28.251959   20885 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0829 18:06:28.251988   20885 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0829 18:06:28.300902   20885 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0829 18:06:28.300928   20885 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0829 18:06:28.332357   20885 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:06:28.332381   20885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0829 18:06:28.370668   20885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:06:28.382769   20885 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0829 18:06:28.382796   20885 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0829 18:06:28.405269   20885 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0829 18:06:28.405292   20885 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0829 18:06:28.501996   20885 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0829 18:06:28.502024   20885 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0829 18:06:28.521977   20885 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:06:28.522022   20885 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 18:06:28.594785   20885 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0829 18:06:28.594809   20885 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0829 18:06:28.631933   20885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:06:28.645522   20885 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0829 18:06:28.645553   20885 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0829 18:06:28.684052   20885 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:06:28.684070   20885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0829 18:06:28.719335   20885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:28.749775   20885 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0829 18:06:28.749797   20885 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0829 18:06:28.784221   20885 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0829 18:06:28.784239   20885 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0829 18:06:28.789774   20885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:06:28.819534   20885 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:28.819553   20885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0829 18:06:28.887033   20885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:06:28.917304   20885 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0829 18:06:28.917325   20885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0829 18:06:28.997789   20885 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0829 18:06:28.997812   20885 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0829 18:06:29.032268   20885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:29.094513   20885 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0829 18:06:29.094542   20885 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0829 18:06:29.234832   20885 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0829 18:06:29.234853   20885 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0829 18:06:29.478508   20885 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0829 18:06:29.478537   20885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0829 18:06:29.864878   20885 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:06:29.864906   20885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0829 18:06:30.042232   20885 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0829 18:06:30.042261   20885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0829 18:06:30.130925   20885 pod_ready.go:103] pod "coredns-6f6b679f8f-hr6j2" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:30.191964   20885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:06:30.270727   20885 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.453798273s)
	I0829 18:06:30.270760   20885 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0829 18:06:30.271645   20885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.362374669s)
	I0829 18:06:30.271668   20885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.274229529s)
	I0829 18:06:30.271696   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:30.271697   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:30.271711   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:30.271713   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:30.273296   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:30.273302   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:30.273313   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:30.273333   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:30.273359   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:30.273363   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:30.273371   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:30.273380   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:30.273380   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:30.273587   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:30.273601   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:30.273681   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:30.273690   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:30.352878   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:30.352905   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:30.353191   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:30.353216   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:30.353229   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:30.500112   20885 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:06:30.500142   20885 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0829 18:06:30.819123   20885 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-661794" context rescaled to 1 replicas
	I0829 18:06:30.902403   20885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:06:32.182662   20885 pod_ready.go:103] pod "coredns-6f6b679f8f-hr6j2" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:33.519721   20885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.508274533s)
	I0829 18:06:33.519738   20885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.497061475s)
	I0829 18:06:33.519773   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:33.519786   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:33.519857   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:33.519892   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:33.520055   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:33.520069   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:33.520079   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:33.520087   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:33.520173   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:33.520202   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:33.520209   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:33.520216   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:33.520227   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:33.520312   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:33.520314   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:33.520322   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:33.520567   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:33.520594   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:33.520612   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:33.590426   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:33.590448   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:33.590786   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:33.590811   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:34.380357   20885 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0829 18:06:34.380392   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:06:34.383713   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:34.384128   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:06:34.384163   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:34.384389   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:06:34.384639   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:06:34.384797   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:06:34.384963   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	I0829 18:06:34.633320   20885 pod_ready.go:103] pod "coredns-6f6b679f8f-hr6j2" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:35.203524   20885 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0829 18:06:35.545778   20885 addons.go:234] Setting addon gcp-auth=true in "addons-661794"
	I0829 18:06:35.545840   20885 host.go:66] Checking if "addons-661794" exists ...
	I0829 18:06:35.546210   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:35.546239   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:35.561744   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40755
	I0829 18:06:35.562262   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:35.562734   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:35.562759   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:35.563117   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:35.563691   20885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:06:35.563733   20885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:35.580003   20885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46485
	I0829 18:06:35.580493   20885 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:35.580990   20885 main.go:141] libmachine: Using API Version  1
	I0829 18:06:35.581013   20885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:35.581380   20885 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:35.581568   20885 main.go:141] libmachine: (addons-661794) Calling .GetState
	I0829 18:06:35.583525   20885 main.go:141] libmachine: (addons-661794) Calling .DriverName
	I0829 18:06:35.583765   20885 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0829 18:06:35.583786   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHHostname
	I0829 18:06:35.586781   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:35.587193   20885 main.go:141] libmachine: (addons-661794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5e:4e", ip: ""} in network mk-addons-661794: {Iface:virbr1 ExpiryTime:2024-08-29 19:05:47 +0000 UTC Type:0 Mac:52:54:00:d6:5e:4e Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-661794 Clientid:01:52:54:00:d6:5e:4e}
	I0829 18:06:35.587218   20885 main.go:141] libmachine: (addons-661794) DBG | domain addons-661794 has defined IP address 192.168.39.206 and MAC address 52:54:00:d6:5e:4e in network mk-addons-661794
	I0829 18:06:35.587426   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHPort
	I0829 18:06:35.587626   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHKeyPath
	I0829 18:06:35.587801   20885 main.go:141] libmachine: (addons-661794) Calling .GetSSHUsername
	I0829 18:06:35.587948   20885 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/addons-661794/id_rsa Username:docker}
	I0829 18:06:36.705183   20885 pod_ready.go:103] pod "coredns-6f6b679f8f-hr6j2" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:39.243370   20885 pod_ready.go:98] pod "coredns-6f6b679f8f-hr6j2" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:38 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:27 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:27 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:27 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.206 HostIPs:[{IP:192.168.39.206}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-29 18:06:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-29 18:06:30 +0000 UTC,FinishedAt:2024-08-29 18:06:37 +0000 UTC,ContainerID:docker://965a80eebaba03f3ea03c5a25a8be4854f8724f15c4d5bed2c520b03f05acee8,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://965a80eebaba03f3ea03c5a25a8be4854f8724f15c4d5bed2c520b03f05acee8 Started:0xc00243dc80 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0025001c0} {Name:kube-api-access-v2mlr MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0025001d0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0829 18:06:39.243402   20885 pod_ready.go:82] duration metric: took 11.118707009s for pod "coredns-6f6b679f8f-hr6j2" in "kube-system" namespace to be "Ready" ...
	E0829 18:06:39.243417   20885 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-hr6j2" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:38 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:27 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:27 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:27 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.206 HostIPs:[{IP:192.168.39.206}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-29 18:06:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-29 18:06:30 +0000 UTC,FinishedAt:2024-08-29 18:06:37 +0000 UTC,ContainerID:docker://965a80eebaba03f3ea03c5a25a8be4854f8724f15c4d5bed2c520b03f05acee8,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://965a80eebaba03f3ea03c5a25a8be4854f8724f15c4d5bed2c520b03f05acee8 Started:0xc00243dc80 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0025001c0} {Name:kube-api-access-v2mlr MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0025001d0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0829 18:06:39.243428   20885 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ktqqc" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:39.325378   20885 pod_ready.go:93] pod "coredns-6f6b679f8f-ktqqc" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:39.325400   20885 pod_ready.go:82] duration metric: took 81.964427ms for pod "coredns-6f6b679f8f-ktqqc" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:39.325412   20885 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-661794" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:39.371722   20885 pod_ready.go:93] pod "etcd-addons-661794" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:39.371759   20885 pod_ready.go:82] duration metric: took 46.33845ms for pod "etcd-addons-661794" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:39.371775   20885 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-661794" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:39.391934   20885 pod_ready.go:93] pod "kube-apiserver-addons-661794" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:39.391961   20885 pod_ready.go:82] duration metric: took 20.178025ms for pod "kube-apiserver-addons-661794" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:39.391973   20885 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-661794" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:39.449134   20885 pod_ready.go:93] pod "kube-controller-manager-addons-661794" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:39.449172   20885 pod_ready.go:82] duration metric: took 57.189342ms for pod "kube-controller-manager-addons-661794" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:39.449186   20885 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p6jxm" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:39.581833   20885 pod_ready.go:93] pod "kube-proxy-p6jxm" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:39.581867   20885 pod_ready.go:82] duration metric: took 132.672296ms for pod "kube-proxy-p6jxm" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:39.581881   20885 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-661794" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:39.957349   20885 pod_ready.go:93] pod "kube-scheduler-addons-661794" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:39.957381   20885 pod_ready.go:82] duration metric: took 375.491628ms for pod "kube-scheduler-addons-661794" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:39.957390   20885 pod_ready.go:39] duration metric: took 11.840586701s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:06:39.957421   20885 api_server.go:52] waiting for apiserver process to appear ...
	I0829 18:06:39.957472   20885 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:06:40.281753   20885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (12.219369019s)
	I0829 18:06:40.281800   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:40.281812   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:40.281872   20885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.192550728s)
	I0829 18:06:40.281911   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:40.281974   20885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (12.235262636s)
	I0829 18:06:40.282001   20885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.65001466s)
	I0829 18:06:40.281992   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:40.282047   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:40.282056   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:40.282061   20885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.562693463s)
	I0829 18:06:40.282065   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:40.282078   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:40.282087   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:40.282088   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:40.282100   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:40.282020   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:40.282146   20885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.492346392s)
	I0829 18:06:40.282166   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:40.282021   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:40.282181   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:40.282207   20885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (11.395140701s)
	I0829 18:06:40.282166   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:40.282227   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:40.282232   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:40.282243   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:40.282281   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:40.282315   20885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (11.250015339s)
	I0829 18:06:40.282317   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:40.282329   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:40.282337   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:40.282338   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:40.282343   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	W0829 18:06:40.282342   20885 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:06:40.282380   20885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (10.090380324s)
	I0829 18:06:40.282389   20885 retry.go:31] will retry after 326.85111ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:06:40.282358   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:40.282398   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:40.282408   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:40.282409   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:40.282418   20885 addons.go:475] Verifying addon ingress=true in "addons-661794"
	I0829 18:06:40.282641   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:40.282661   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:40.282675   20885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.380237821s)
	I0829 18:06:40.282703   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:40.282715   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:40.282723   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:40.282732   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:40.282741   20885 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.698961574s)
	I0829 18:06:40.282704   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:40.282773   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:40.281946   20885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (11.911245915s)
	I0829 18:06:40.283370   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:40.283381   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:40.283442   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:40.283463   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:40.283470   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:40.283478   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:40.283484   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:40.283519   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:40.283535   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:40.283542   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:40.283870   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:40.283893   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:40.283900   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:40.283912   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:40.283918   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:40.284664   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:40.284700   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:40.284707   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:40.284926   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:40.284950   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:40.284957   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:40.284966   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:40.284973   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:40.285034   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:40.285054   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:40.285060   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:40.282685   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:40.285233   20885 api_server.go:72] duration metric: took 13.048821646s to wait for apiserver process to appear ...
	I0829 18:06:40.285257   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:40.285271   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:40.285292   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:40.285308   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:40.285329   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:40.285348   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:40.285781   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:40.285847   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:40.286501   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:40.286514   20885 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-661794"
	I0829 18:06:40.286649   20885 api_server.go:88] waiting for apiserver healthz status ...
	I0829 18:06:40.286723   20885 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I0829 18:06:40.287173   20885 out.go:177] * Verifying ingress addon...
	I0829 18:06:40.286681   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:40.286804   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:40.287378   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:40.286816   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:40.287388   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:40.287395   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:40.287405   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:40.286825   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:40.287452   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:40.286781   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:40.287458   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:40.287463   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:40.286743   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:40.287504   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:40.287511   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:40.287764   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:40.287768   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:40.287781   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:40.287791   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:40.287799   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:40.287806   20885 addons.go:475] Verifying addon registry=true in "addons-661794"
	I0829 18:06:40.287791   20885 addons.go:475] Verifying addon metrics-server=true in "addons-661794"
	I0829 18:06:40.288094   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:40.288124   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:40.288132   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:40.288408   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:40.288472   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:40.288479   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:40.289031   20885 out.go:177] * Verifying registry addon...
	I0829 18:06:40.289263   20885 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-661794 service yakd-dashboard -n yakd-dashboard
	
	I0829 18:06:40.289331   20885 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:40.289459   20885 out.go:177] * Verifying csi-hostpath-driver addon...
	I0829 18:06:40.290062   20885 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0829 18:06:40.290892   20885 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0829 18:06:40.292436   20885 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0829 18:06:40.293260   20885 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0829 18:06:40.295124   20885 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0829 18:06:40.295148   20885 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0829 18:06:40.356123   20885 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I0829 18:06:40.359170   20885 api_server.go:141] control plane version: v1.31.0
	I0829 18:06:40.359195   20885 api_server.go:131] duration metric: took 72.485251ms to wait for apiserver health ...
	I0829 18:06:40.359203   20885 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 18:06:40.370910   20885 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0829 18:06:40.370935   20885 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0829 18:06:40.389108   20885 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0829 18:06:40.389129   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:40.389733   20885 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0829 18:06:40.389758   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:40.390225   20885 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0829 18:06:40.390239   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:40.401379   20885 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:06:40.401402   20885 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0829 18:06:40.437270   20885 system_pods.go:59] 19 kube-system pods found
	I0829 18:06:40.437313   20885 system_pods.go:61] "coredns-6f6b679f8f-hr6j2" [49bbb5e7-a7eb-4ac6-bb4c-521168dbd2a9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I0829 18:06:40.437321   20885 system_pods.go:61] "coredns-6f6b679f8f-ktqqc" [bd65086a-b80c-4ed2-8f45-6cf765f4c38e] Running
	I0829 18:06:40.437330   20885 system_pods.go:61] "csi-hostpath-attacher-0" [ff11c35c-6562-4603-9c36-b749dbb0be10] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0829 18:06:40.437337   20885 system_pods.go:61] "csi-hostpath-resizer-0" [ee6fb23d-d2ae-4762-9900-bf1d675d6566] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0829 18:06:40.437347   20885 system_pods.go:61] "csi-hostpathplugin-sqh9j" [f1d0cbf4-498f-4aa0-b657-f7b741d3624e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0829 18:06:40.437359   20885 system_pods.go:61] "etcd-addons-661794" [5bd50bac-a696-46b5-ab1a-cdb630f99c81] Running
	I0829 18:06:40.437365   20885 system_pods.go:61] "kube-apiserver-addons-661794" [5979768a-290f-4cfd-9683-881070b2401c] Running
	I0829 18:06:40.437370   20885 system_pods.go:61] "kube-controller-manager-addons-661794" [ab320c14-5504-49e0-a190-cc608914216f] Running
	I0829 18:06:40.437384   20885 system_pods.go:61] "kube-ingress-dns-minikube" [a6d185a9-baad-4704-98c4-3468afc954de] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0829 18:06:40.437394   20885 system_pods.go:61] "kube-proxy-p6jxm" [9cb3f450-b232-4312-9635-6074b2a48806] Running
	I0829 18:06:40.437402   20885 system_pods.go:61] "kube-scheduler-addons-661794" [bfe3e4ef-6d12-471d-8ce2-f227e546beff] Running
	I0829 18:06:40.437409   20885 system_pods.go:61] "metrics-server-8988944d9-g5zvg" [9052b9fb-3bb0-4669-ba65-8897461bb1b6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 18:06:40.437430   20885 system_pods.go:61] "nvidia-device-plugin-daemonset-lllsx" [caac2bdb-3473-4c6c-b776-5d7733ef7b03] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0829 18:06:40.437442   20885 system_pods.go:61] "registry-6fb4cdfc84-776mp" [3de6ff17-cf9f-4375-8344-461862b48005] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0829 18:06:40.437460   20885 system_pods.go:61] "registry-proxy-2d74h" [0872ecae-1d14-4b03-b5a7-3bed9bab8b7a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0829 18:06:40.437470   20885 system_pods.go:61] "snapshot-controller-56fcc65765-8jplq" [1934e699-6abe-416e-a80e-b5a4c2f0980f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:06:40.437480   20885 system_pods.go:61] "snapshot-controller-56fcc65765-mp7st" [156227cf-66d4-492c-a32a-5cc17e90a39b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:06:40.437488   20885 system_pods.go:61] "storage-provisioner" [5e39cb01-ed5f-4f5e-8ef2-86afc2df39f4] Running
	I0829 18:06:40.437496   20885 system_pods.go:61] "tiller-deploy-b48cc5f79-8h5wn" [30921fdb-1918-49e2-b8bf-a71f0a3cfc73] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0829 18:06:40.437507   20885 system_pods.go:74] duration metric: took 78.296359ms to wait for pod list to return data ...
	I0829 18:06:40.437520   20885 default_sa.go:34] waiting for default service account to be created ...
	I0829 18:06:40.439048   20885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:06:40.483725   20885 default_sa.go:45] found service account: "default"
	I0829 18:06:40.483756   20885 default_sa.go:55] duration metric: took 46.222085ms for default service account to be created ...
	I0829 18:06:40.483766   20885 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 18:06:40.610332   20885 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:40.651218   20885 system_pods.go:86] 19 kube-system pods found
	I0829 18:06:40.651266   20885 system_pods.go:89] "coredns-6f6b679f8f-hr6j2" [49bbb5e7-a7eb-4ac6-bb4c-521168dbd2a9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I0829 18:06:40.651276   20885 system_pods.go:89] "coredns-6f6b679f8f-ktqqc" [bd65086a-b80c-4ed2-8f45-6cf765f4c38e] Running
	I0829 18:06:40.651288   20885 system_pods.go:89] "csi-hostpath-attacher-0" [ff11c35c-6562-4603-9c36-b749dbb0be10] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0829 18:06:40.651298   20885 system_pods.go:89] "csi-hostpath-resizer-0" [ee6fb23d-d2ae-4762-9900-bf1d675d6566] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0829 18:06:40.651312   20885 system_pods.go:89] "csi-hostpathplugin-sqh9j" [f1d0cbf4-498f-4aa0-b657-f7b741d3624e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0829 18:06:40.651323   20885 system_pods.go:89] "etcd-addons-661794" [5bd50bac-a696-46b5-ab1a-cdb630f99c81] Running
	I0829 18:06:40.651332   20885 system_pods.go:89] "kube-apiserver-addons-661794" [5979768a-290f-4cfd-9683-881070b2401c] Running
	I0829 18:06:40.651342   20885 system_pods.go:89] "kube-controller-manager-addons-661794" [ab320c14-5504-49e0-a190-cc608914216f] Running
	I0829 18:06:40.651355   20885 system_pods.go:89] "kube-ingress-dns-minikube" [a6d185a9-baad-4704-98c4-3468afc954de] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0829 18:06:40.651364   20885 system_pods.go:89] "kube-proxy-p6jxm" [9cb3f450-b232-4312-9635-6074b2a48806] Running
	I0829 18:06:40.651374   20885 system_pods.go:89] "kube-scheduler-addons-661794" [bfe3e4ef-6d12-471d-8ce2-f227e546beff] Running
	I0829 18:06:40.651386   20885 system_pods.go:89] "metrics-server-8988944d9-g5zvg" [9052b9fb-3bb0-4669-ba65-8897461bb1b6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 18:06:40.651399   20885 system_pods.go:89] "nvidia-device-plugin-daemonset-lllsx" [caac2bdb-3473-4c6c-b776-5d7733ef7b03] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0829 18:06:40.651418   20885 system_pods.go:89] "registry-6fb4cdfc84-776mp" [3de6ff17-cf9f-4375-8344-461862b48005] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0829 18:06:40.651431   20885 system_pods.go:89] "registry-proxy-2d74h" [0872ecae-1d14-4b03-b5a7-3bed9bab8b7a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0829 18:06:40.651444   20885 system_pods.go:89] "snapshot-controller-56fcc65765-8jplq" [1934e699-6abe-416e-a80e-b5a4c2f0980f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:06:40.651456   20885 system_pods.go:89] "snapshot-controller-56fcc65765-mp7st" [156227cf-66d4-492c-a32a-5cc17e90a39b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:06:40.651466   20885 system_pods.go:89] "storage-provisioner" [5e39cb01-ed5f-4f5e-8ef2-86afc2df39f4] Running
	I0829 18:06:40.651480   20885 system_pods.go:89] "tiller-deploy-b48cc5f79-8h5wn" [30921fdb-1918-49e2-b8bf-a71f0a3cfc73] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0829 18:06:40.651494   20885 system_pods.go:126] duration metric: took 167.721974ms to wait for k8s-apps to be running ...
	I0829 18:06:40.651516   20885 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 18:06:40.651573   20885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:06:40.814116   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:40.814349   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:40.814479   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:41.300091   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:41.300322   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:41.300468   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:41.805877   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:41.806134   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:41.806365   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:42.259252   20885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.820158315s)
	I0829 18:06:42.259309   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:42.259320   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:42.259628   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:42.259670   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:42.259681   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:42.259695   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:42.259714   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:42.259970   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:42.260007   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:42.260018   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:42.261927   20885 addons.go:475] Verifying addon gcp-auth=true in "addons-661794"
	I0829 18:06:42.264038   20885 out.go:177] * Verifying gcp-auth addon...
	I0829 18:06:42.266284   20885 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0829 18:06:42.274943   20885 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:06:42.373891   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:42.374668   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:42.375090   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:42.501634   20885 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.891253502s)
	I0829 18:06:42.501678   20885 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.850078866s)
	I0829 18:06:42.501699   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:42.501704   20885 system_svc.go:56] duration metric: took 1.850187021s WaitForService to wait for kubelet
	I0829 18:06:42.501717   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:42.501719   20885 kubeadm.go:582] duration metric: took 15.265307623s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:06:42.501748   20885 node_conditions.go:102] verifying NodePressure condition ...
	I0829 18:06:42.501999   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:42.502016   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:42.502028   20885 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:42.502040   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:42.502041   20885 main.go:141] libmachine: (addons-661794) Calling .Close
	I0829 18:06:42.502343   20885 main.go:141] libmachine: (addons-661794) DBG | Closing plugin on server side
	I0829 18:06:42.502382   20885 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:42.502398   20885 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:42.505403   20885 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 18:06:42.505425   20885 node_conditions.go:123] node cpu capacity is 2
	I0829 18:06:42.505435   20885 node_conditions.go:105] duration metric: took 3.681718ms to run NodePressure ...
	I0829 18:06:42.505445   20885 start.go:241] waiting for startup goroutines ...
	I0829 18:06:42.798821   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:42.799717   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:42.800748   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:43.295584   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:43.296085   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:43.297422   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:43.794696   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:43.795265   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:43.797975   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:44.294172   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:44.295768   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:44.296718   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:44.795119   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:44.795441   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:44.798042   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:45.293798   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:45.296949   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:45.297331   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:45.796819   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:45.797109   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:45.798240   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:46.295009   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:46.296566   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:46.297705   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:46.795521   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:46.795809   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:46.796960   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:47.294414   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:47.295047   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:47.296107   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:47.928253   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:47.928357   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:47.928887   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:48.296099   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:48.297118   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:48.298174   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:48.802148   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:48.803951   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:48.806174   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:49.296171   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:49.297087   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:49.299787   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:49.794283   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:49.794435   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:49.796890   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:50.294939   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:50.296795   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:50.297781   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:50.794491   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:50.795034   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:50.796884   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:51.572518   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:51.573851   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:51.574033   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:51.793903   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:51.796331   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:51.796647   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:52.294460   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:52.295421   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:52.296866   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:52.795183   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:52.795737   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:52.798017   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:53.294685   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:53.295984   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:53.297441   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:53.794748   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:53.794837   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:53.797243   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:54.294037   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:54.297215   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:54.297779   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:54.794465   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:54.795198   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:54.797818   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:55.294058   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:55.295606   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:55.297904   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:55.794367   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:55.795741   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:55.797357   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:56.294662   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:56.294791   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:56.296870   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:57.003476   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:57.009240   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:57.009881   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:57.371591   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:57.372160   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:57.372479   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:57.794248   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:57.796780   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:57.796823   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:58.294661   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:58.295406   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:58.297422   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:58.794662   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:58.795294   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:58.797534   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:59.294650   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:59.295771   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:59.297733   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:59.795077   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:59.795133   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:59.798065   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:00.294579   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:00.295333   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:00.296814   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:00.824978   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:00.825502   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:00.826426   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:01.299747   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:01.300354   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:01.302659   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:01.794479   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:01.795271   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:01.796448   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:02.295438   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:02.296375   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:02.298441   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:02.794883   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:02.795531   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:02.797526   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:03.296311   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:03.296401   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:03.301371   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:03.794317   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:03.795764   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:03.797684   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:04.294922   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:04.294978   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:04.297318   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:04.794248   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:04.796080   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:04.797117   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:05.294065   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:05.295120   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:05.297251   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:05.794199   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:05.796894   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:05.797482   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:06.295951   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:06.296421   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:06.298040   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:06.794599   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:06.795772   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:06.797869   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:07.295224   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:07.295966   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:07.296925   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:07.795639   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:07.795856   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:07.798623   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:08.452394   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:08.452449   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:08.453293   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:08.793588   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:08.796399   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:08.796764   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:09.374173   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:09.374768   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:09.375496   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:09.794840   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:09.795073   20885 kapi.go:107] duration metric: took 29.504177003s to wait for kubernetes.io/minikube-addons=registry ...
	I0829 18:07:09.797775   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:10.294618   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:10.297198   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:10.794210   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:10.796681   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:11.294527   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:11.297209   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:11.874018   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:11.874369   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:12.294415   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:12.296718   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:12.795142   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:12.796775   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:13.371458   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:13.371564   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:13.794438   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:13.796775   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:14.294483   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:14.298342   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:14.795185   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:14.797800   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.294793   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:15.296641   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.794585   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:15.797637   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:16.294429   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:16.297276   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:16.794653   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:16.796176   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:17.294471   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:17.296642   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:17.795586   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:17.797856   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:18.294867   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:18.297350   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:18.873787   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:18.874499   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:19.295272   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:19.297675   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:19.793686   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:19.796567   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:20.295162   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:20.297639   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:20.794689   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:20.797425   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:21.293955   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:21.296693   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:21.796076   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:21.798571   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:22.294242   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.297610   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:22.793836   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.795866   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:23.294904   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:23.297234   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:23.873468   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:23.873474   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:24.294353   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.297715   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:24.794664   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.797121   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:25.307356   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:25.307621   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:25.794360   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:25.796628   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:26.295101   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:26.297592   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:26.797765   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:26.798534   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:27.295630   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:27.297443   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:27.893362   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:27.893736   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:28.297554   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:28.300306   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:28.801305   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:28.801363   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:29.296442   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:29.299161   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:29.879724   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:29.879775   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:30.303754   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:30.395118   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:30.794058   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:30.796850   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:31.379028   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:31.383018   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:31.794377   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:31.796844   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:32.295229   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:32.298272   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:32.795296   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:32.799209   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:33.294523   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.296665   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:33.794170   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.796230   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:34.294075   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:34.296619   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:34.794732   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:34.797360   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:35.293945   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:35.376839   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:35.794716   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:35.798411   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:36.295069   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:36.296598   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:36.794944   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:36.797091   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:37.294103   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:37.296815   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:37.794190   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:37.796115   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:38.294436   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:38.298664   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:38.795852   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:38.797070   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:39.297841   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:39.302072   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:39.794066   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:39.796733   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:40.294886   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.296258   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:40.793768   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.796568   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:41.295773   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:41.299166   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:41.794638   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:41.801156   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:42.373182   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:42.373296   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:42.796021   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:42.800406   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:43.294817   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:43.297751   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:43.796244   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:43.798201   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:44.294794   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:44.297218   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:44.796597   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:44.797505   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:45.377178   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:45.377476   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:45.797424   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:45.801659   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:46.294886   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:46.300271   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:46.801084   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:46.806199   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:47.293954   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:47.296689   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:47.794496   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:47.797567   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:48.294365   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:48.296790   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:48.794389   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:48.796333   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:49.294760   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:49.296967   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:49.794985   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:49.796798   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:50.294017   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:50.296424   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:50.793929   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:50.796172   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:51.319966   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:51.321682   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:51.794157   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:51.797597   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:52.294667   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:52.297260   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:52.794580   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:52.796919   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:53.381361   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:53.381362   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:53.794363   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:53.803789   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:54.294676   20885 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:54.297462   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:54.795631   20885 kapi.go:107] duration metric: took 1m14.505563034s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0829 18:07:54.796897   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:55.312707   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:55.797822   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:56.296887   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:56.798639   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:57.299109   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:57.873192   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:58.296888   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:58.796586   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:59.392696   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:59.796459   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:00.296803   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:00.796659   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:01.372106   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:01.797384   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:02.296990   20885 kapi.go:107] duration metric: took 1m22.004552568s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0829 18:08:04.271375   20885 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:08:04.271398   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:04.771374   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:05.270430   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:05.770134   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:06.271189   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:06.769970   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:07.270795   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:07.770810   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:08.270555   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... identical "waiting for pod \"kubernetes.io/minikube-addons=gcp-auth\", current state: Pending" entries, logged every ~500ms from 18:08:08 through 18:09:10, elided ...]
	I0829 18:09:11.270474   20885 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:11.769953   20885 kapi.go:107] duration metric: took 2m29.50366798s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0829 18:09:11.771834   20885 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-661794 cluster.
	I0829 18:09:11.773286   20885 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0829 18:09:11.774501   20885 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0829 18:09:11.775881   20885 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, ingress-dns, storage-provisioner-rancher, cloud-spanner, helm-tiller, storage-provisioner, metrics-server, inspektor-gadget, volcano, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0829 18:09:11.777952   20885 addons.go:510] duration metric: took 2m44.541502695s for enable addons: enabled=[nvidia-device-plugin default-storageclass ingress-dns storage-provisioner-rancher cloud-spanner helm-tiller storage-provisioner metrics-server inspektor-gadget volcano yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0829 18:09:11.778009   20885 start.go:246] waiting for cluster config update ...
	I0829 18:09:11.778034   20885 start.go:255] writing updated cluster config ...
	I0829 18:09:11.778319   20885 ssh_runner.go:195] Run: rm -f paused
	I0829 18:09:11.829651   20885 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 18:09:11.831491   20885 out.go:177] * Done! kubectl is now configured to use "addons-661794" cluster and "default" namespace by default
	
	
	==> Docker <==
	Aug 29 18:19:06 addons-661794 dockerd[1192]: time="2024-08-29T18:19:06.758465477Z" level=warning msg="cleaning up after shim disconnected" id=4f2f2beed19cf2cb468e2d80ac040dc28f827f3bf5b7a2008e5c9d9d7e8eb224 namespace=moby
	Aug 29 18:19:06 addons-661794 dockerd[1192]: time="2024-08-29T18:19:06.758476244Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 29 18:19:06 addons-661794 dockerd[1185]: time="2024-08-29T18:19:06.761809650Z" level=info msg="ignoring event" container=4f2f2beed19cf2cb468e2d80ac040dc28f827f3bf5b7a2008e5c9d9d7e8eb224 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:06 addons-661794 dockerd[1192]: time="2024-08-29T18:19:06.787279133Z" level=info msg="shim disconnected" id=56782909b8013f846ffb80f0c655c74cb7f666b1b3d7406cbf3cb3ed2a0b4e63 namespace=moby
	Aug 29 18:19:06 addons-661794 dockerd[1192]: time="2024-08-29T18:19:06.787733267Z" level=warning msg="cleaning up after shim disconnected" id=56782909b8013f846ffb80f0c655c74cb7f666b1b3d7406cbf3cb3ed2a0b4e63 namespace=moby
	Aug 29 18:19:06 addons-661794 dockerd[1192]: time="2024-08-29T18:19:06.787890685Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 29 18:19:06 addons-661794 dockerd[1185]: time="2024-08-29T18:19:06.787008264Z" level=info msg="ignoring event" container=56782909b8013f846ffb80f0c655c74cb7f666b1b3d7406cbf3cb3ed2a0b4e63 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:06 addons-661794 dockerd[1185]: time="2024-08-29T18:19:06.950592078Z" level=info msg="ignoring event" container=5c7dd064baefa1b9079c9261b419bdeae200901991eb9664274b2bb3789be038 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:06 addons-661794 dockerd[1192]: time="2024-08-29T18:19:06.951554325Z" level=info msg="shim disconnected" id=5c7dd064baefa1b9079c9261b419bdeae200901991eb9664274b2bb3789be038 namespace=moby
	Aug 29 18:19:06 addons-661794 dockerd[1192]: time="2024-08-29T18:19:06.951711354Z" level=warning msg="cleaning up after shim disconnected" id=5c7dd064baefa1b9079c9261b419bdeae200901991eb9664274b2bb3789be038 namespace=moby
	Aug 29 18:19:06 addons-661794 dockerd[1192]: time="2024-08-29T18:19:06.951802650Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 29 18:19:06 addons-661794 dockerd[1192]: time="2024-08-29T18:19:06.985779851Z" level=info msg="shim disconnected" id=e26923ba8519ffaf9f3fda1f78a0f73ed59daea99778505c3c62762b8a40cce5 namespace=moby
	Aug 29 18:19:06 addons-661794 dockerd[1192]: time="2024-08-29T18:19:06.985965260Z" level=warning msg="cleaning up after shim disconnected" id=e26923ba8519ffaf9f3fda1f78a0f73ed59daea99778505c3c62762b8a40cce5 namespace=moby
	Aug 29 18:19:06 addons-661794 dockerd[1192]: time="2024-08-29T18:19:06.986292181Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 29 18:19:06 addons-661794 dockerd[1185]: time="2024-08-29T18:19:06.988084341Z" level=info msg="ignoring event" container=e26923ba8519ffaf9f3fda1f78a0f73ed59daea99778505c3c62762b8a40cce5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:07 addons-661794 dockerd[1185]: time="2024-08-29T18:19:07.165678280Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=f50f1e7e92b9b224a38a54311be8cb68b3922b72998f28d814e6b753b104ba40
	Aug 29 18:19:07 addons-661794 dockerd[1185]: time="2024-08-29T18:19:07.228815580Z" level=info msg="ignoring event" container=f50f1e7e92b9b224a38a54311be8cb68b3922b72998f28d814e6b753b104ba40 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:07 addons-661794 dockerd[1192]: time="2024-08-29T18:19:07.229459557Z" level=info msg="shim disconnected" id=f50f1e7e92b9b224a38a54311be8cb68b3922b72998f28d814e6b753b104ba40 namespace=moby
	Aug 29 18:19:07 addons-661794 dockerd[1192]: time="2024-08-29T18:19:07.229652785Z" level=warning msg="cleaning up after shim disconnected" id=f50f1e7e92b9b224a38a54311be8cb68b3922b72998f28d814e6b753b104ba40 namespace=moby
	Aug 29 18:19:07 addons-661794 dockerd[1192]: time="2024-08-29T18:19:07.229664924Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 29 18:19:07 addons-661794 dockerd[1192]: time="2024-08-29T18:19:07.386289974Z" level=info msg="shim disconnected" id=6b2e2046f1788960a2791d0b05bfa169e08f37f52f34c33b90c622c9701578f1 namespace=moby
	Aug 29 18:19:07 addons-661794 dockerd[1192]: time="2024-08-29T18:19:07.386619770Z" level=warning msg="cleaning up after shim disconnected" id=6b2e2046f1788960a2791d0b05bfa169e08f37f52f34c33b90c622c9701578f1 namespace=moby
	Aug 29 18:19:07 addons-661794 dockerd[1192]: time="2024-08-29T18:19:07.386635801Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 29 18:19:07 addons-661794 dockerd[1185]: time="2024-08-29T18:19:07.388025800Z" level=info msg="ignoring event" container=6b2e2046f1788960a2791d0b05bfa169e08f37f52f34c33b90c622c9701578f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:07 addons-661794 dockerd[1192]: time="2024-08-29T18:19:07.424280211Z" level=warning msg="cleanup warnings time=\"2024-08-29T18:19:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b22cae982e33e       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  3 seconds ago       Running             hello-world-app           0                   2a595fae55487       hello-world-app-55bf9c44b4-lvt8s
	6e62ac7ad1b4c       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158                                                13 seconds ago      Running             nginx                     0                   0425e7980dccd       nginx
	3be418ba68df7       alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                          38 seconds ago      Exited              helm-test                 0                   e785fd11892c9       helm-test
	472f3f4c60545       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   51c234b488b10       gcp-auth-89d5ffd79-ggshh
	f50f1e7e92b9b       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Unknown             controller                0                   6b2e2046f1788       ingress-nginx-controller-bc57996ff-mgf8c
	d5d77d0d2cd2c       ce263a8653f9c                                                                                                                11 minutes ago      Exited              patch                     1                   b182e62a20683       ingress-nginx-admission-patch-mdzjk
	c7816e557766a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   4596883994c8a       ingress-nginx-admission-create-ggj9t
	56782909b8013       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              11 minutes ago      Exited              registry-proxy            0                   e26923ba8519f       registry-proxy-2d74h
	861c68ac9c1a0       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   3a8bee20db011       storage-provisioner
	26d7afcbc6102       cbb01a7bd410d                                                                                                                12 minutes ago      Running             coredns                   0                   167ce96ab63b2       coredns-6f6b679f8f-ktqqc
	ac1b19d15852f       ad83b2ca7b09e                                                                                                                12 minutes ago      Running             kube-proxy                0                   eeeb0ad5df7ea       kube-proxy-p6jxm
	5712be4d9fab9       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   cbad4dc8009bd       etcd-addons-661794
	4cbd39d8b3b84       1766f54c897f0                                                                                                                12 minutes ago      Running             kube-scheduler            0                   e716729111825       kube-scheduler-addons-661794
	f618d55981e22       604f5db92eaa8                                                                                                                12 minutes ago      Running             kube-apiserver            0                   2b5008411cfdc       kube-apiserver-addons-661794
	b018401bb5c0e       045733566833c                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   cff065e553ede       kube-controller-manager-addons-661794
	
	
	==> controller_ingress [f50f1e7e92b9] <==
	command /bin/bash -c "docker logs --tail 25 f50f1e7e92b9" failed with error: /bin/bash -c "docker logs --tail 25 f50f1e7e92b9": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: f50f1e7e92b9
	
	
	==> coredns [26d7afcbc610] <==
	[INFO] 10.244.0.22:39600 - 31220 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000137654s
	[INFO] 10.244.0.22:39600 - 50829 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000124617s
	[INFO] 10.244.0.22:39600 - 3286 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000088374s
	[INFO] 10.244.0.22:39600 - 19342 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000185245s
	[INFO] 10.244.0.22:48253 - 27936 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000097598s
	[INFO] 10.244.0.22:48253 - 60311 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000101184s
	[INFO] 10.244.0.22:48253 - 8027 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000127686s
	[INFO] 10.244.0.22:48253 - 34963 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000101085s
	[INFO] 10.244.0.22:48253 - 47927 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055524s
	[INFO] 10.244.0.22:48253 - 11714 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000110946s
	[INFO] 10.244.0.22:48253 - 36507 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006857s
	[INFO] 10.244.0.22:46785 - 41579 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000114658s
	[INFO] 10.244.0.22:46785 - 27737 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000064636s
	[INFO] 10.244.0.22:33989 - 51736 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000029891s
	[INFO] 10.244.0.22:46785 - 42081 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000071516s
	[INFO] 10.244.0.22:33989 - 17605 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000022842s
	[INFO] 10.244.0.22:46785 - 38642 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000086897s
	[INFO] 10.244.0.22:33989 - 26659 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000093936s
	[INFO] 10.244.0.22:33989 - 64950 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040755s
	[INFO] 10.244.0.22:46785 - 10790 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000028397s
	[INFO] 10.244.0.22:46785 - 49118 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000086355s
	[INFO] 10.244.0.22:33989 - 45273 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000081886s
	[INFO] 10.244.0.22:33989 - 58221 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065284s
	[INFO] 10.244.0.22:46785 - 36987 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000051952s
	[INFO] 10.244.0.22:33989 - 10500 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053632s
	
	
	==> describe nodes <==
	Name:               addons-661794
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-661794
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=addons-661794
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_06_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-661794
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:06:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-661794
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:19:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:18:55 +0000   Thu, 29 Aug 2024 18:06:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:18:55 +0000   Thu, 29 Aug 2024 18:06:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:18:55 +0000   Thu, 29 Aug 2024 18:06:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:18:55 +0000   Thu, 29 Aug 2024 18:06:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    addons-661794
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 26e322e0a652458b948dc7a6b261ca49
	  System UUID:                26e322e0-a652-458b-948d-c7a6b261ca49
	  Boot ID:                    2b819a57-3be0-4a58-9db9-2fede0b5c07d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     hello-world-app-55bf9c44b4-lvt8s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	  gcp-auth                    gcp-auth-89d5ffd79-ggshh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-ktqqc                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-661794                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-661794             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-661794    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-p6jxm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-661794             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x2 over 12m)  kubelet          Node addons-661794 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x2 over 12m)  kubelet          Node addons-661794 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x2 over 12m)  kubelet          Node addons-661794 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                12m                kubelet          Node addons-661794 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-661794 event: Registered Node addons-661794 in Controller
	
	
	==> dmesg <==
	[  +6.116031] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.928781] kauditd_printk_skb: 47 callbacks suppressed
	[  +6.362250] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.084373] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.024711] kauditd_printk_skb: 44 callbacks suppressed
	[ +11.675646] kauditd_printk_skb: 23 callbacks suppressed
	[Aug29 18:08] kauditd_printk_skb: 32 callbacks suppressed
	[  +9.124759] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 18:09] kauditd_printk_skb: 40 callbacks suppressed
	[ +10.306744] kauditd_printk_skb: 9 callbacks suppressed
	[ +11.474938] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.858402] kauditd_printk_skb: 2 callbacks suppressed
	[ +17.379152] kauditd_printk_skb: 22 callbacks suppressed
	[Aug29 18:10] kauditd_printk_skb: 2 callbacks suppressed
	[Aug29 18:13] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 18:17] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 18:18] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.080991] kauditd_printk_skb: 36 callbacks suppressed
	[ +13.112428] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.509018] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.792835] kauditd_printk_skb: 89 callbacks suppressed
	[  +5.120509] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.622576] kauditd_printk_skb: 27 callbacks suppressed
	[ +11.378466] kauditd_printk_skb: 19 callbacks suppressed
	[Aug29 18:19] kauditd_printk_skb: 48 callbacks suppressed
	
	
	==> etcd [5712be4d9fab] <==
	{"level":"warn","ts":"2024-08-29T18:07:18.711580Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.63455ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:07:18.711600Z","caller":"traceutil/trace.go:171","msg":"trace[969279542] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1055; }","duration":"126.658207ms","start":"2024-08-29T18:07:18.584935Z","end":"2024-08-29T18:07:18.711594Z","steps":["trace[969279542] 'agreement among raft nodes before linearized reading'  (duration: 126.618591ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:07:27.872349Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.939693ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:07:27.872566Z","caller":"traceutil/trace.go:171","msg":"trace[1970678939] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1083; }","duration":"120.155441ms","start":"2024-08-29T18:07:27.752380Z","end":"2024-08-29T18:07:27.872535Z","steps":["trace[1970678939] 'range keys from in-memory index tree'  (duration: 119.794003ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:07:51.293539Z","caller":"traceutil/trace.go:171","msg":"trace[821436102] transaction","detail":"{read_only:false; response_revision:1219; number_of_response:1; }","duration":"317.887447ms","start":"2024-08-29T18:07:50.975577Z","end":"2024-08-29T18:07:51.293465Z","steps":["trace[821436102] 'process raft request'  (duration: 317.702787ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:07:51.295963Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:07:50.975563Z","time spent":"318.799631ms","remote":"127.0.0.1:49870","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1213 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-29T18:07:53.359152Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"285.801902ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:07:53.359217Z","caller":"traceutil/trace.go:171","msg":"trace[524480857] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; response_count:0; response_revision:1223; }","duration":"285.884261ms","start":"2024-08-29T18:07:53.073322Z","end":"2024-08-29T18:07:53.359206Z","steps":["trace[524480857] 'count revisions from in-memory index tree'  (duration: 285.750836ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:07:53.359576Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.227483ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:07:53.359599Z","caller":"traceutil/trace.go:171","msg":"trace[2044126904] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1223; }","duration":"108.255236ms","start":"2024-08-29T18:07:53.251338Z","end":"2024-08-29T18:07:53.359593Z","steps":["trace[2044126904] 'range keys from in-memory index tree'  (duration: 108.133664ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:07:59.368716Z","caller":"traceutil/trace.go:171","msg":"trace[831065320] linearizableReadLoop","detail":"{readStateIndex:1312; appliedIndex:1311; }","duration":"117.4796ms","start":"2024-08-29T18:07:59.251215Z","end":"2024-08-29T18:07:59.368694Z","steps":["trace[831065320] 'read index received'  (duration: 115.016115ms)","trace[831065320] 'applied index is now lower than readState.Index'  (duration: 2.462863ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:07:59.368824Z","caller":"traceutil/trace.go:171","msg":"trace[1232061060] transaction","detail":"{read_only:false; response_revision:1272; number_of_response:1; }","duration":"130.969646ms","start":"2024-08-29T18:07:59.237848Z","end":"2024-08-29T18:07:59.368818Z","steps":["trace[1232061060] 'process raft request'  (duration: 128.452089ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:07:59.369213Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.97636ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:07:59.369258Z","caller":"traceutil/trace.go:171","msg":"trace[830669325] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1272; }","duration":"118.039275ms","start":"2024-08-29T18:07:59.251210Z","end":"2024-08-29T18:07:59.369249Z","steps":["trace[830669325] 'agreement among raft nodes before linearized reading'  (duration: 117.930244ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:09:35.502523Z","caller":"traceutil/trace.go:171","msg":"trace[625261093] linearizableReadLoop","detail":"{readStateIndex:1615; appliedIndex:1614; }","duration":"320.174968ms","start":"2024-08-29T18:09:35.182330Z","end":"2024-08-29T18:09:35.502505Z","steps":["trace[625261093] 'read index received'  (duration: 319.999323ms)","trace[625261093] 'applied index is now lower than readState.Index'  (duration: 175.312µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-29T18:09:35.502667Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"320.300244ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:09:35.502691Z","caller":"traceutil/trace.go:171","msg":"trace[119361313] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1552; }","duration":"320.369858ms","start":"2024-08-29T18:09:35.182315Z","end":"2024-08-29T18:09:35.502685Z","steps":["trace[119361313] 'agreement among raft nodes before linearized reading'  (duration: 320.264136ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:09:35.502749Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:09:35.182281Z","time spent":"320.4252ms","remote":"127.0.0.1:49882","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-08-29T18:09:35.502818Z","caller":"traceutil/trace.go:171","msg":"trace[1990385575] transaction","detail":"{read_only:false; response_revision:1552; number_of_response:1; }","duration":"350.203418ms","start":"2024-08-29T18:09:35.152600Z","end":"2024-08-29T18:09:35.502803Z","steps":["trace[1990385575] 'process raft request'  (duration: 349.763279ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:09:35.502940Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:09:35.152585Z","time spent":"350.281269ms","remote":"127.0.0.1:49936","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1544 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-08-29T18:16:17.919162Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1864}
	{"level":"info","ts":"2024-08-29T18:16:18.034572Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1864,"took":"112.731396ms","hash":1952578789,"current-db-size-bytes":9244672,"current-db-size":"9.2 MB","current-db-size-in-use-bytes":4972544,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-08-29T18:16:18.034635Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1952578789,"revision":1864,"compact-revision":-1}
	{"level":"warn","ts":"2024-08-29T18:18:13.320899Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.779322ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8855644355740305898 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/secrets/yakd-dashboard/gcp-auth\" mod_revision:1478 > success:<request_delete_range:<key:\"/registry/secrets/yakd-dashboard/gcp-auth\" > > failure:<request_range:<key:\"/registry/secrets/yakd-dashboard/gcp-auth\" > >>","response":"size:18"}
	{"level":"info","ts":"2024-08-29T18:18:13.321106Z","caller":"traceutil/trace.go:171","msg":"trace[2038925810] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2620; }","duration":"151.012467ms","start":"2024-08-29T18:18:13.170066Z","end":"2024-08-29T18:18:13.321079Z","steps":["trace[2038925810] 'process raft request'  (duration: 47.217795ms)","trace[2038925810] 'compare'  (duration: 101.470051ms)"],"step_count":2}
	
	
	==> gcp-auth [472f3f4c6054] <==
	2024/08/29 18:17:56 Ready to write response ...
	2024/08/29 18:17:56 Ready to marshal response ...
	2024/08/29 18:17:56 Ready to write response ...
	2024/08/29 18:17:58 Ready to marshal response ...
	2024/08/29 18:17:58 Ready to write response ...
	2024/08/29 18:18:06 Ready to marshal response ...
	2024/08/29 18:18:06 Ready to write response ...
	2024/08/29 18:18:06 Ready to marshal response ...
	2024/08/29 18:18:06 Ready to write response ...
	2024/08/29 18:18:19 Ready to marshal response ...
	2024/08/29 18:18:19 Ready to write response ...
	2024/08/29 18:18:26 Ready to marshal response ...
	2024/08/29 18:18:26 Ready to write response ...
	2024/08/29 18:18:33 Ready to marshal response ...
	2024/08/29 18:18:33 Ready to write response ...
	2024/08/29 18:18:36 Ready to marshal response ...
	2024/08/29 18:18:36 Ready to write response ...
	2024/08/29 18:18:37 Ready to marshal response ...
	2024/08/29 18:18:37 Ready to write response ...
	2024/08/29 18:18:37 Ready to marshal response ...
	2024/08/29 18:18:37 Ready to write response ...
	2024/08/29 18:18:50 Ready to marshal response ...
	2024/08/29 18:18:50 Ready to write response ...
	2024/08/29 18:19:01 Ready to marshal response ...
	2024/08/29 18:19:01 Ready to write response ...
	
	
	==> kernel <==
	 18:19:08 up 13 min,  0 users,  load average: 0.82, 0.68, 0.58
	Linux addons-661794 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f618d55981e2] <==
	W0829 18:09:46.354181       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0829 18:09:46.479692       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0829 18:18:05.381111       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0829 18:18:22.874559       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0829 18:18:30.199273       1 conn.go:339] Error on socket receive: read tcp 192.168.39.206:8443->192.168.39.1:45006: use of closed network connection
	I0829 18:18:35.585578       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:18:35.588763       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:18:35.624068       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:18:35.632832       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:18:35.744671       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:18:35.744813       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:18:35.777978       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:18:35.778029       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:18:35.812042       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:18:35.812136       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:18:36.337144       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	W0829 18:18:36.780018       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0829 18:18:36.812829       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0829 18:18:36.894867       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0829 18:18:36.987455       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.75.204"}
	I0829 18:18:48.150743       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0829 18:18:49.270665       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0829 18:18:50.416942       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0829 18:18:50.617658       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.215.248"}
	I0829 18:19:02.119648       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.238.159"}
	
	
	==> kube-controller-manager [b018401bb5c0] <==
	E0829 18:18:53.390467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:18:55.097903       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I0829 18:18:55.840994       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-661794"
	W0829 18:18:55.979017       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:55.979076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:18:57.144666       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0829 18:18:57.144714       1 shared_informer.go:320] Caches are synced for garbage collector
	I0829 18:18:57.181295       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0829 18:18:57.181666       1 shared_informer.go:320] Caches are synced for resource quota
	W0829 18:18:57.991728       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:57.991775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:18:58.438062       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0829 18:18:59.617026       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	W0829 18:19:01.901526       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:01.901743       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:19:01.955534       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="28.478674ms"
	I0829 18:19:01.971118       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="15.16463ms"
	I0829 18:19:01.971203       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="38.097µs"
	I0829 18:19:01.986988       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="42.855µs"
	I0829 18:19:04.107425       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0829 18:19:04.112315       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="4.925µs"
	I0829 18:19:04.119351       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0829 18:19:04.599785       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="17.175505ms"
	I0829 18:19:04.600060       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="141.465µs"
	I0829 18:19:06.661292       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="3.086µs"
	
	
	==> kube-proxy [ac1b19d15852] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 18:06:28.209320       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 18:06:28.224939       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.206"]
	E0829 18:06:28.225012       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:06:28.292638       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 18:06:28.292684       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 18:06:28.292711       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:06:28.295135       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:06:28.295380       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:06:28.297493       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:06:28.298761       1 config.go:197] "Starting service config controller"
	I0829 18:06:28.298802       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:06:28.298825       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:06:28.298841       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:06:28.303714       1 config.go:326] "Starting node config controller"
	I0829 18:06:28.303739       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:06:28.399347       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 18:06:28.399428       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:06:28.403910       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4cbd39d8b3b8] <==
	E0829 18:06:19.539460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:19.533515       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 18:06:19.539612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0829 18:06:19.539691       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:20.345351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 18:06:20.345622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:20.359588       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0829 18:06:20.359806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:20.422621       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:20.422667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:20.473313       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 18:06:20.473576       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:20.506795       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 18:06:20.507027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:20.512649       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 18:06:20.512837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:20.636214       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 18:06:20.636332       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:20.724364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:20.724599       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:20.802001       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:20.802050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:21.081519       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 18:06:21.081758       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0829 18:06:23.016480       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 18:19:06 addons-661794 kubelet[1962]: I0829 18:19:06.501097    1962 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tpbjq\" (UniqueName: \"kubernetes.io/projected/ebe832ca-b1ce-41e7-b4f3-23498b02778a-kube-api-access-tpbjq\") on node \"addons-661794\" DevicePath \"\""
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.104845    1962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfwhx\" (UniqueName: \"kubernetes.io/projected/0872ecae-1d14-4b03-b5a7-3bed9bab8b7a-kube-api-access-sfwhx\") pod \"0872ecae-1d14-4b03-b5a7-3bed9bab8b7a\" (UID: \"0872ecae-1d14-4b03-b5a7-3bed9bab8b7a\") "
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.104916    1962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w96w\" (UniqueName: \"kubernetes.io/projected/3de6ff17-cf9f-4375-8344-461862b48005-kube-api-access-8w96w\") pod \"3de6ff17-cf9f-4375-8344-461862b48005\" (UID: \"3de6ff17-cf9f-4375-8344-461862b48005\") "
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.107356    1962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0872ecae-1d14-4b03-b5a7-3bed9bab8b7a-kube-api-access-sfwhx" (OuterVolumeSpecName: "kube-api-access-sfwhx") pod "0872ecae-1d14-4b03-b5a7-3bed9bab8b7a" (UID: "0872ecae-1d14-4b03-b5a7-3bed9bab8b7a"). InnerVolumeSpecName "kube-api-access-sfwhx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.107831    1962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de6ff17-cf9f-4375-8344-461862b48005-kube-api-access-8w96w" (OuterVolumeSpecName: "kube-api-access-8w96w") pod "3de6ff17-cf9f-4375-8344-461862b48005" (UID: "3de6ff17-cf9f-4375-8344-461862b48005"). InnerVolumeSpecName "kube-api-access-8w96w". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.206173    1962 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sfwhx\" (UniqueName: \"kubernetes.io/projected/0872ecae-1d14-4b03-b5a7-3bed9bab8b7a-kube-api-access-sfwhx\") on node \"addons-661794\" DevicePath \"\""
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.206253    1962 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8w96w\" (UniqueName: \"kubernetes.io/projected/3de6ff17-cf9f-4375-8344-461862b48005-kube-api-access-8w96w\") on node \"addons-661794\" DevicePath \"\""
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.508576    1962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7e12a284-0eea-45e4-8456-e6f53fb30525-webhook-cert\") pod \"7e12a284-0eea-45e4-8456-e6f53fb30525\" (UID: \"7e12a284-0eea-45e4-8456-e6f53fb30525\") "
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.508617    1962 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lszqb\" (UniqueName: \"kubernetes.io/projected/7e12a284-0eea-45e4-8456-e6f53fb30525-kube-api-access-lszqb\") pod \"7e12a284-0eea-45e4-8456-e6f53fb30525\" (UID: \"7e12a284-0eea-45e4-8456-e6f53fb30525\") "
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.510784    1962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e12a284-0eea-45e4-8456-e6f53fb30525-kube-api-access-lszqb" (OuterVolumeSpecName: "kube-api-access-lszqb") pod "7e12a284-0eea-45e4-8456-e6f53fb30525" (UID: "7e12a284-0eea-45e4-8456-e6f53fb30525"). InnerVolumeSpecName "kube-api-access-lszqb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.516090    1962 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e12a284-0eea-45e4-8456-e6f53fb30525-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7e12a284-0eea-45e4-8456-e6f53fb30525" (UID: "7e12a284-0eea-45e4-8456-e6f53fb30525"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.608769    1962 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7e12a284-0eea-45e4-8456-e6f53fb30525-webhook-cert\") on node \"addons-661794\" DevicePath \"\""
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.608809    1962 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lszqb\" (UniqueName: \"kubernetes.io/projected/7e12a284-0eea-45e4-8456-e6f53fb30525-kube-api-access-lszqb\") on node \"addons-661794\" DevicePath \"\""
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.630174    1962 scope.go:117] "RemoveContainer" containerID="4f2f2beed19cf2cb468e2d80ac040dc28f827f3bf5b7a2008e5c9d9d7e8eb224"
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.696140    1962 scope.go:117] "RemoveContainer" containerID="4f2f2beed19cf2cb468e2d80ac040dc28f827f3bf5b7a2008e5c9d9d7e8eb224"
	Aug 29 18:19:07 addons-661794 kubelet[1962]: E0829 18:19:07.700266    1962 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 4f2f2beed19cf2cb468e2d80ac040dc28f827f3bf5b7a2008e5c9d9d7e8eb224" containerID="4f2f2beed19cf2cb468e2d80ac040dc28f827f3bf5b7a2008e5c9d9d7e8eb224"
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.700477    1962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"4f2f2beed19cf2cb468e2d80ac040dc28f827f3bf5b7a2008e5c9d9d7e8eb224"} err="failed to get container status \"4f2f2beed19cf2cb468e2d80ac040dc28f827f3bf5b7a2008e5c9d9d7e8eb224\": rpc error: code = Unknown desc = Error response from daemon: No such container: 4f2f2beed19cf2cb468e2d80ac040dc28f827f3bf5b7a2008e5c9d9d7e8eb224"
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.700569    1962 scope.go:117] "RemoveContainer" containerID="f50f1e7e92b9b224a38a54311be8cb68b3922b72998f28d814e6b753b104ba40"
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.735362    1962 scope.go:117] "RemoveContainer" containerID="f50f1e7e92b9b224a38a54311be8cb68b3922b72998f28d814e6b753b104ba40"
	Aug 29 18:19:07 addons-661794 kubelet[1962]: E0829 18:19:07.736571    1962 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: f50f1e7e92b9b224a38a54311be8cb68b3922b72998f28d814e6b753b104ba40" containerID="f50f1e7e92b9b224a38a54311be8cb68b3922b72998f28d814e6b753b104ba40"
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.736600    1962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"f50f1e7e92b9b224a38a54311be8cb68b3922b72998f28d814e6b753b104ba40"} err="failed to get container status \"f50f1e7e92b9b224a38a54311be8cb68b3922b72998f28d814e6b753b104ba40\": rpc error: code = Unknown desc = Error response from daemon: No such container: f50f1e7e92b9b224a38a54311be8cb68b3922b72998f28d814e6b753b104ba40"
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.736628    1962 scope.go:117] "RemoveContainer" containerID="56782909b8013f846ffb80f0c655c74cb7f666b1b3d7406cbf3cb3ed2a0b4e63"
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.767180    1962 scope.go:117] "RemoveContainer" containerID="56782909b8013f846ffb80f0c655c74cb7f666b1b3d7406cbf3cb3ed2a0b4e63"
	Aug 29 18:19:07 addons-661794 kubelet[1962]: E0829 18:19:07.768517    1962 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 56782909b8013f846ffb80f0c655c74cb7f666b1b3d7406cbf3cb3ed2a0b4e63" containerID="56782909b8013f846ffb80f0c655c74cb7f666b1b3d7406cbf3cb3ed2a0b4e63"
	Aug 29 18:19:07 addons-661794 kubelet[1962]: I0829 18:19:07.768556    1962 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"56782909b8013f846ffb80f0c655c74cb7f666b1b3d7406cbf3cb3ed2a0b4e63"} err="failed to get container status \"56782909b8013f846ffb80f0c655c74cb7f666b1b3d7406cbf3cb3ed2a0b4e63\": rpc error: code = Unknown desc = Error response from daemon: No such container: 56782909b8013f846ffb80f0c655c74cb7f666b1b3d7406cbf3cb3ed2a0b4e63"
	
	
	==> storage-provisioner [861c68ac9c1a] <==
	I0829 18:06:37.588344       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 18:06:37.616242       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 18:06:37.616332       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 18:06:37.657051       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 18:06:37.657463       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-661794_6dd4c84f-ac05-476a-8e7b-c0bd34030a42!
	I0829 18:06:37.659431       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"051a5776-c8a3-4ca3-af3f-935d77087201", APIVersion:"v1", ResourceVersion:"725", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-661794_6dd4c84f-ac05-476a-8e7b-c0bd34030a42 became leader
	I0829 18:06:37.758141       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-661794_6dd4c84f-ac05-476a-8e7b-c0bd34030a42!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-661794 -n addons-661794
helpers_test.go:261: (dbg) Run:  kubectl --context addons-661794 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-661794 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-661794 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-661794/192.168.39.206
	Start Time:       Thu, 29 Aug 2024 18:09:53 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wdzpf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wdzpf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m15s                  default-scheduler  Successfully assigned default/busybox to addons-661794
	  Normal   Pulling    7m39s (x4 over 9m14s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m39s (x4 over 9m14s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m39s (x4 over 9m14s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m26s (x6 over 9m14s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m2s (x20 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (72.73s)


Test pass (309/341)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.67
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 3.7
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.13
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.6
22 TestOffline 65.09
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 218.3
29 TestAddons/serial/Volcano 41.75
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 21.07
35 TestAddons/parallel/InspektorGadget 11.86
36 TestAddons/parallel/MetricsServer 6.7
37 TestAddons/parallel/HelmTiller 16.25
39 TestAddons/parallel/CSI 40.26
40 TestAddons/parallel/Headlamp 18.45
41 TestAddons/parallel/CloudSpanner 5.53
42 TestAddons/parallel/LocalPath 54.17
43 TestAddons/parallel/NvidiaDevicePlugin 6.43
44 TestAddons/parallel/Yakd 10.83
45 TestAddons/StoppedEnableDisable 8.55
46 TestCertOptions 68.63
47 TestCertExpiration 347.3
48 TestDockerFlags 103.9
49 TestForceSystemdFlag 82.08
50 TestForceSystemdEnv 114.01
52 TestKVMDriverInstallOrUpdate 3.93
56 TestErrorSpam/setup 50.75
57 TestErrorSpam/start 0.35
58 TestErrorSpam/status 0.72
59 TestErrorSpam/pause 1.23
60 TestErrorSpam/unpause 1.4
61 TestErrorSpam/stop 6.93
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 57.97
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 39.72
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.22
73 TestFunctional/serial/CacheCmd/cache/add_local 1.29
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.12
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 41.53
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 1
84 TestFunctional/serial/LogsFileCmd 1.02
85 TestFunctional/serial/InvalidService 4.82
87 TestFunctional/parallel/ConfigCmd 0.34
88 TestFunctional/parallel/DashboardCmd 24.98
89 TestFunctional/parallel/DryRun 0.27
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 0.83
95 TestFunctional/parallel/ServiceCmdConnect 9.7
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 40.11
99 TestFunctional/parallel/SSHCmd 0.37
100 TestFunctional/parallel/CpCmd 1.34
101 TestFunctional/parallel/MySQL 31.4
102 TestFunctional/parallel/FileSync 0.21
103 TestFunctional/parallel/CertSync 1.29
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.24
111 TestFunctional/parallel/License 0.25
112 TestFunctional/parallel/ServiceCmd/DeployApp 11.24
113 TestFunctional/parallel/Version/short 0.04
114 TestFunctional/parallel/Version/components 0.67
115 TestFunctional/parallel/DockerEnv/bash 0.84
116 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
117 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
118 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
119 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
120 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
121 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
122 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
123 TestFunctional/parallel/ImageCommands/ImageBuild 3.67
124 TestFunctional/parallel/ImageCommands/Setup 1.72
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.24
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
127 TestFunctional/parallel/ProfileCmd/profile_list 0.37
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
129 TestFunctional/parallel/MountCmd/any-port 22.39
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.75
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.5
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.41
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.79
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.45
136 TestFunctional/parallel/ServiceCmd/List 0.84
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.83
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
139 TestFunctional/parallel/ServiceCmd/Format 0.28
140 TestFunctional/parallel/ServiceCmd/URL 0.27
150 TestFunctional/parallel/MountCmd/specific-port 1.44
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.44
152 TestFunctional/delete_echo-server_images 0.03
153 TestFunctional/delete_my-image_image 0.01
154 TestFunctional/delete_minikube_cached_images 0.02
155 TestGvisorAddon 267.75
158 TestMultiControlPlane/serial/StartCluster 219.66
159 TestMultiControlPlane/serial/DeployApp 5.08
160 TestMultiControlPlane/serial/PingHostFromPods 1.25
161 TestMultiControlPlane/serial/AddWorkerNode 62.42
162 TestMultiControlPlane/serial/NodeLabels 0.07
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
164 TestMultiControlPlane/serial/CopyFile 12.36
165 TestMultiControlPlane/serial/StopSecondaryNode 13.24
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.39
167 TestMultiControlPlane/serial/RestartSecondaryNode 44.95
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.51
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 247.76
170 TestMultiControlPlane/serial/DeleteSecondaryNode 7.16
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
172 TestMultiControlPlane/serial/StopCluster 38.23
173 TestMultiControlPlane/serial/RestartCluster 124.79
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
175 TestMultiControlPlane/serial/AddSecondaryNode 79.52
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
179 TestImageBuild/serial/Setup 46.22
180 TestImageBuild/serial/NormalBuild 2.11
181 TestImageBuild/serial/BuildWithBuildArg 1.13
182 TestImageBuild/serial/BuildWithDockerIgnore 1.03
183 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.91
187 TestJSONOutput/start/Command 59.69
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.58
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.54
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 12.61
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.19
215 TestMainNoArgs 0.04
216 TestMinikubeProfile 98.79
219 TestMountStart/serial/StartWithMountFirst 27.87
220 TestMountStart/serial/VerifyMountFirst 0.37
221 TestMountStart/serial/StartWithMountSecond 28.23
222 TestMountStart/serial/VerifyMountSecond 0.36
223 TestMountStart/serial/DeleteFirst 0.69
224 TestMountStart/serial/VerifyMountPostDelete 0.36
225 TestMountStart/serial/Stop 2.27
226 TestMountStart/serial/RestartStopped 24.13
227 TestMountStart/serial/VerifyMountPostStop 0.38
230 TestMultiNode/serial/FreshStart2Nodes 127.45
231 TestMultiNode/serial/DeployApp2Nodes 4.72
232 TestMultiNode/serial/PingHostFrom2Pods 0.8
233 TestMultiNode/serial/AddNode 57.32
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.21
236 TestMultiNode/serial/CopyFile 7.03
237 TestMultiNode/serial/StopNode 3.27
238 TestMultiNode/serial/StartAfterStop 41.98
239 TestMultiNode/serial/RestartKeepsNodes 190.69
240 TestMultiNode/serial/DeleteNode 2.25
241 TestMultiNode/serial/StopMultiNode 25.04
242 TestMultiNode/serial/RestartMultiNode 114.35
243 TestMultiNode/serial/ValidateNameConflict 50.95
248 TestPreload 185.48
250 TestScheduledStopUnix 120.27
251 TestSkaffold 126.82
254 TestRunningBinaryUpgrade 148.73
256 TestKubernetesUpgrade 147.72
269 TestStoppedBinaryUpgrade/Setup 0.51
270 TestStoppedBinaryUpgrade/Upgrade 180.72
272 TestPause/serial/Start 100.22
281 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
282 TestNoKubernetes/serial/StartWithK8s 89.3
283 TestNetworkPlugins/group/auto/Start 135.74
284 TestNoKubernetes/serial/StartWithStopK8s 27.01
285 TestPause/serial/SecondStartNoReconfiguration 51.83
286 TestStoppedBinaryUpgrade/MinikubeLogs 1.11
287 TestNetworkPlugins/group/kindnet/Start 76.64
288 TestNoKubernetes/serial/Start 44.1
289 TestPause/serial/Pause 0.61
290 TestPause/serial/VerifyStatus 0.26
291 TestPause/serial/Unpause 0.56
292 TestPause/serial/PauseAgain 0.77
293 TestPause/serial/DeletePaused 0.85
294 TestPause/serial/VerifyDeletedResources 1.31
295 TestNetworkPlugins/group/calico/Start 96.84
296 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
297 TestNoKubernetes/serial/ProfileList 1.22
298 TestNoKubernetes/serial/Stop 2.3
299 TestNoKubernetes/serial/StartNoArgs 45.69
300 TestNetworkPlugins/group/auto/KubeletFlags 0.21
301 TestNetworkPlugins/group/auto/NetCatPod 10.25
302 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
303 TestNetworkPlugins/group/auto/DNS 0.16
304 TestNetworkPlugins/group/auto/Localhost 0.14
305 TestNetworkPlugins/group/auto/HairPin 0.13
306 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
307 TestNetworkPlugins/group/kindnet/NetCatPod 14.29
308 TestNetworkPlugins/group/custom-flannel/Start 76.18
309 TestNetworkPlugins/group/kindnet/DNS 0.18
310 TestNetworkPlugins/group/kindnet/Localhost 0.15
311 TestNetworkPlugins/group/kindnet/HairPin 0.16
312 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.25
313 TestNetworkPlugins/group/false/Start 88.84
314 TestNetworkPlugins/group/enable-default-cni/Start 109.05
315 TestNetworkPlugins/group/calico/ControllerPod 6.01
316 TestNetworkPlugins/group/calico/KubeletFlags 0.21
317 TestNetworkPlugins/group/calico/NetCatPod 10.25
318 TestNetworkPlugins/group/calico/DNS 0.22
319 TestNetworkPlugins/group/calico/Localhost 0.19
320 TestNetworkPlugins/group/calico/HairPin 0.24
321 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
322 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.32
323 TestNetworkPlugins/group/flannel/Start 77.61
324 TestNetworkPlugins/group/custom-flannel/DNS 0.21
325 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
326 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
327 TestNetworkPlugins/group/false/KubeletFlags 0.22
328 TestNetworkPlugins/group/false/NetCatPod 12.25
329 TestNetworkPlugins/group/bridge/Start 104.13
330 TestNetworkPlugins/group/false/DNS 0.21
331 TestNetworkPlugins/group/false/Localhost 0.16
332 TestNetworkPlugins/group/false/HairPin 0.15
333 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
334 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
335 TestNetworkPlugins/group/kubenet/Start 80.27
336 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
337 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
338 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
340 TestStartStop/group/old-k8s-version/serial/FirstStart 194.09
341 TestNetworkPlugins/group/flannel/ControllerPod 6.01
342 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
343 TestNetworkPlugins/group/flannel/NetCatPod 10.29
344 TestNetworkPlugins/group/flannel/DNS 0.19
345 TestNetworkPlugins/group/flannel/Localhost 0.14
346 TestNetworkPlugins/group/flannel/HairPin 0.19
348 TestStartStop/group/no-preload/serial/FirstStart 90.31
349 TestNetworkPlugins/group/kubenet/KubeletFlags 0.24
350 TestNetworkPlugins/group/kubenet/NetCatPod 13.28
351 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
352 TestNetworkPlugins/group/bridge/NetCatPod 13.3
353 TestNetworkPlugins/group/kubenet/DNS 0.17
354 TestNetworkPlugins/group/kubenet/Localhost 0.14
355 TestNetworkPlugins/group/kubenet/HairPin 0.15
356 TestNetworkPlugins/group/bridge/DNS 0.25
357 TestNetworkPlugins/group/bridge/Localhost 0.17
358 TestNetworkPlugins/group/bridge/HairPin 0.17
360 TestStartStop/group/embed-certs/serial/FirstStart 101.91
362 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 126.57
363 TestStartStop/group/no-preload/serial/DeployApp 10.37
364 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.04
365 TestStartStop/group/no-preload/serial/Stop 13.88
366 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.3
367 TestStartStop/group/no-preload/serial/SecondStart 309.25
368 TestStartStop/group/embed-certs/serial/DeployApp 9.33
369 TestStartStop/group/old-k8s-version/serial/DeployApp 9.49
370 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
371 TestStartStop/group/embed-certs/serial/Stop 12.68
372 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.89
373 TestStartStop/group/old-k8s-version/serial/Stop 12.64
374 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
375 TestStartStop/group/embed-certs/serial/SecondStart 304.81
376 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.33
377 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
378 TestStartStop/group/old-k8s-version/serial/SecondStart 422.03
379 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.98
380 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.34
381 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
382 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 315.04
383 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
384 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.2
386 TestStartStop/group/no-preload/serial/Pause 2.44
388 TestStartStop/group/newest-cni/serial/FirstStart 62.29
389 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
390 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
391 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
392 TestStartStop/group/embed-certs/serial/Pause 2.62
393 TestStartStop/group/newest-cni/serial/DeployApp 0
394 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.87
395 TestStartStop/group/newest-cni/serial/Stop 8.33
396 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
397 TestStartStop/group/newest-cni/serial/SecondStart 38.27
398 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
399 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
400 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
401 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.37
402 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
403 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
404 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
405 TestStartStop/group/newest-cni/serial/Pause 2.47
406 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
407 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.07
408 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
409 TestStartStop/group/old-k8s-version/serial/Pause 2.25
TestDownloadOnly/v1.20.0/json-events (7.67s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-564069 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-564069 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (7.672871939s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.67s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-564069
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-564069: exit status 85 (57.441233ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-564069 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |          |
	|         | -p download-only-564069        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:05:20
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:05:20.677816   20262 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:05:20.677935   20262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:20.677945   20262 out.go:358] Setting ErrFile to fd 2...
	I0829 18:05:20.677952   20262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:20.678168   20262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13071/.minikube/bin
	W0829 18:05:20.678315   20262 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19531-13071/.minikube/config/config.json: open /home/jenkins/minikube-integration/19531-13071/.minikube/config/config.json: no such file or directory
	I0829 18:05:20.678907   20262 out.go:352] Setting JSON to true
	I0829 18:05:20.679822   20262 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2869,"bootTime":1724951852,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:05:20.679879   20262 start.go:139] virtualization: kvm guest
	I0829 18:05:20.682169   20262 out.go:97] [download-only-564069] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0829 18:05:20.682264   20262 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19531-13071/.minikube/cache/preloaded-tarball: no such file or directory
	I0829 18:05:20.682344   20262 notify.go:220] Checking for updates...
	I0829 18:05:20.683640   20262 out.go:169] MINIKUBE_LOCATION=19531
	I0829 18:05:20.685095   20262 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:05:20.686615   20262 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19531-13071/kubeconfig
	I0829 18:05:20.688016   20262 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13071/.minikube
	I0829 18:05:20.689336   20262 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0829 18:05:20.691741   20262 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0829 18:05:20.691962   20262 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:05:20.793773   20262 out.go:97] Using the kvm2 driver based on user configuration
	I0829 18:05:20.793813   20262 start.go:297] selected driver: kvm2
	I0829 18:05:20.793821   20262 start.go:901] validating driver "kvm2" against <nil>
	I0829 18:05:20.794189   20262 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:05:20.794343   20262 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13071/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 18:05:20.808786   20262 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 18:05:20.808829   20262 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:05:20.809289   20262 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0829 18:05:20.809430   20262 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0829 18:05:20.809487   20262 cni.go:84] Creating CNI manager for ""
	I0829 18:05:20.809503   20262 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0829 18:05:20.809554   20262 start.go:340] cluster config:
	{Name:download-only-564069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-564069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:05:20.809725   20262 iso.go:125] acquiring lock: {Name:mk111510bb887618e1358eefed89382b2a0d6da2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:05:20.811579   20262 out.go:97] Downloading VM boot image ...
	I0829 18:05:20.811614   20262 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19531-13071/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0829 18:05:23.821413   20262 out.go:97] Starting "download-only-564069" primary control-plane node in "download-only-564069" cluster
	I0829 18:05:23.821432   20262 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0829 18:05:23.869283   20262 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0829 18:05:23.869309   20262 cache.go:56] Caching tarball of preloaded images
	I0829 18:05:23.869482   20262 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0829 18:05:23.871462   20262 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0829 18:05:23.871496   20262 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0829 18:05:23.905035   20262 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19531-13071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-564069 host does not exist
	  To start a cluster, run: "minikube start -p download-only-564069"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-564069
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0/json-events (3.7s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-398580 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-398580 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=kvm2 : (3.697574707s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (3.70s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-398580
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-398580: exit status 85 (56.91645ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-564069 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | -p download-only-564069        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| delete  | -p download-only-564069        | download-only-564069 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| start   | -o=json --download-only        | download-only-398580 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | -p download-only-398580        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:05:28
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:05:28.672204   20471 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:05:28.672319   20471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:28.672328   20471 out.go:358] Setting ErrFile to fd 2...
	I0829 18:05:28.672332   20471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:28.672544   20471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13071/.minikube/bin
	I0829 18:05:28.673155   20471 out.go:352] Setting JSON to true
	I0829 18:05:28.673960   20471 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2877,"bootTime":1724951852,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:05:28.674043   20471 start.go:139] virtualization: kvm guest
	I0829 18:05:28.676306   20471 out.go:97] [download-only-398580] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:05:28.676467   20471 notify.go:220] Checking for updates...
	I0829 18:05:28.678059   20471 out.go:169] MINIKUBE_LOCATION=19531
	I0829 18:05:28.679702   20471 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:05:28.681265   20471 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19531-13071/kubeconfig
	I0829 18:05:28.682594   20471 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13071/.minikube
	I0829 18:05:28.683844   20471 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-398580 host does not exist
	  To start a cluster, run: "minikube start -p download-only-398580"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-398580
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-683228 --alsologtostderr --binary-mirror http://127.0.0.1:36491 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-683228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-683228
--- PASS: TestBinaryMirror (0.60s)

TestOffline (65.09s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-957156 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-957156 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m4.074914826s)
helpers_test.go:175: Cleaning up "offline-docker-957156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-957156
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-957156: (1.01794439s)
--- PASS: TestOffline (65.09s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-661794
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-661794: exit status 85 (46.336553ms)

-- stdout --
	* Profile "addons-661794" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-661794"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-661794
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-661794: exit status 85 (46.854119ms)

-- stdout --
	* Profile "addons-661794" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-661794"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (218.3s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-661794 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-661794 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m38.304386382s)
--- PASS: TestAddons/Setup (218.30s)

TestAddons/serial/Volcano (41.75s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 15.375781ms
addons_test.go:897: volcano-scheduler stabilized in 16.718708ms
addons_test.go:905: volcano-admission stabilized in 20.42455ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-cv5c6" [4d6dc843-3a16-428d-a4cb-1081173eb950] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003874226s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-zx8k2" [4e3d314e-db0e-436c-828b-88cd047c8d12] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004215668s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-6d5pn" [2f63cc23-72c9-43ea-af17-8c1a0af3edb3] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005356503s
addons_test.go:932: (dbg) Run:  kubectl --context addons-661794 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-661794 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-661794 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [eb2af1f5-bca1-47ed-9c9a-56d8ff739a62] Pending
helpers_test.go:344: "test-job-nginx-0" [eb2af1f5-bca1-47ed-9c9a-56d8ff739a62] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [eb2af1f5-bca1-47ed-9c9a-56d8ff739a62] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.004279154s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-661794 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-661794 addons disable volcano --alsologtostderr -v=1: (10.377480929s)
--- PASS: TestAddons/serial/Volcano (41.75s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-661794 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-661794 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Ingress (21.07s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-661794 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-661794 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-661794 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [cf30def2-f64d-43c5-b918-5a690ac977dd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [cf30def2-f64d-43c5-b918-5a690ac977dd] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004335042s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-661794 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-661794 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-661794 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.206
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-661794 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-661794 addons disable ingress-dns --alsologtostderr -v=1: (1.25126694s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-661794 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-661794 addons disable ingress --alsologtostderr -v=1: (7.63757169s)
--- PASS: TestAddons/parallel/Ingress (21.07s)

TestAddons/parallel/InspektorGadget (11.86s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-q64r5" [9be6a5f8-7516-4e96-825c-74ff8c041e4b] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005231395s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-661794
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-661794: (5.851925852s)
--- PASS: TestAddons/parallel/InspektorGadget (11.86s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.7s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.864655ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-g5zvg" [9052b9fb-3bb0-4669-ba65-8897461bb1b6] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004838291s
addons_test.go:417: (dbg) Run:  kubectl --context addons-661794 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-661794 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.70s)
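The `kubectl top pods` step above only passes once metrics-server returns data. A minimal sketch of that check, using canned sample output instead of a live cluster (the pod name and figures are placeholders):

```shell
# Stand-in for `kubectl --context addons-661794 top pods -n kube-system`;
# no live cluster is assumed here, so sample output is inlined.
top_output='NAME                             CPU(cores)   MEMORY(bytes)
metrics-server-8988944d9-g5zvg   4m           38Mi'

# The test effectively asserts: a header plus at least one data row.
rows=$(printf '%s\n' "$top_output" | tail -n +2 | grep -c .)
if [ "$rows" -ge 1 ]; then
  echo "metrics available for $rows pod(s)"
else
  echo "no pod metrics yet" >&2
  exit 1
fi
```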

                                                
                                    
TestAddons/parallel/HelmTiller (16.25s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 4.87846ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-8h5wn" [30921fdb-1918-49e2-b8bf-a71f0a3cfc73] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.003680933s
addons_test.go:475: (dbg) Run:  kubectl --context addons-661794 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-661794 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.489509326s)
addons_test.go:480: kubectl --context addons-661794 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: error stream protocol error: unknown error
addons_test.go:475: (dbg) Run:  kubectl --context addons-661794 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-661794 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.441356017s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-661794 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (16.25s)

                                                
                                    
TestAddons/parallel/CSI (40.26s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.483078ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-661794 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-661794 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3f000cd2-a7c7-42aa-bee2-afcf9577890f] Pending
helpers_test.go:344: "task-pv-pod" [3f000cd2-a7c7-42aa-bee2-afcf9577890f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3f000cd2-a7c7-42aa-bee2-afcf9577890f] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004431902s
addons_test.go:590: (dbg) Run:  kubectl --context addons-661794 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-661794 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-661794 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-661794 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-661794 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-661794 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-661794 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a9eaa5a1-018e-49e9-807f-6cb84ac41d8a] Pending
helpers_test.go:344: "task-pv-pod-restore" [a9eaa5a1-018e-49e9-807f-6cb84ac41d8a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a9eaa5a1-018e-49e9-807f-6cb84ac41d8a] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004633299s
addons_test.go:632: (dbg) Run:  kubectl --context addons-661794 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-661794 delete pod task-pv-pod-restore: (1.096034207s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-661794 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-661794 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-661794 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-661794 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.750463888s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-661794 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-amd64 -p addons-661794 addons disable volumesnapshots --alsologtostderr -v=1: (1.105104495s)
--- PASS: TestAddons/parallel/CSI (40.26s)
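The long runs of repeated `get pvc ... -o jsonpath={.status.phase}` lines above are a poll loop: the helper re-runs the probe until the PVC reports the wanted phase or the budget runs out. A runnable sketch of that pattern; `wait_for_phase` and `fake_pvc_phase` are hypothetical helpers, not part of minikube's test suite, and the kubectl stand-in lets it run without a cluster:

```shell
# Retry a probe command until it prints the wanted value or attempts run out.
wait_for_phase() {
  want=$1; attempts=$2; shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    phase=$("$@")   # real probe: kubectl get pvc hpvc -o jsonpath={.status.phase}
    if [ "$phase" = "$want" ]; then
      echo "reached $want after $((i + 1)) attempt(s)"
      return 0
    fi
    i=$((i + 1))
    # sleep 1  # the real helper waits between polls; elided to keep this fast
  done
  echo "gave up waiting for $want" >&2
  return 1
}

# kubectl stand-in: reports Pending twice, then Bound.
count_file=$(mktemp)
echo 0 > "$count_file"
fake_pvc_phase() {
  n=$(cat "$count_file")
  echo $((n + 1)) > "$count_file"
  if [ "$n" -lt 2 ]; then echo Pending; else echo Bound; fi
}

wait_for_phase Bound 10 fake_pvc_phase
```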

                                                
                                    
TestAddons/parallel/Headlamp (18.45s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-661794 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-zmbw7" [5bdf9f88-1afa-4f51-af05-e90735d4e2ff] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-zmbw7" [5bdf9f88-1afa-4f51-af05-e90735d4e2ff] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-zmbw7" [5bdf9f88-1afa-4f51-af05-e90735d4e2ff] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003816853s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-661794 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-661794 addons disable headlamp --alsologtostderr -v=1: (5.62507126s)
--- PASS: TestAddons/parallel/Headlamp (18.45s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.53s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-b625b" [2e6e3e15-ee1c-4f79-a491-31df681938dd] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004050073s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-661794
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                    
TestAddons/parallel/LocalPath (54.17s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-661794 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-661794 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661794 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [08f088dd-3551-4735-8190-af1154c62c57] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [08f088dd-3551-4735-8190-af1154c62c57] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [08f088dd-3551-4735-8190-af1154c62c57] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004013738s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-661794 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-661794 ssh "cat /opt/local-path-provisioner/pvc-8f8191a6-b4a1-450c-abbd-925f066b3f23_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-661794 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-661794 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-661794 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-661794 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.31179192s)
--- PASS: TestAddons/parallel/LocalPath (54.17s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.43s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-lllsx" [caac2bdb-3473-4c6c-b776-5d7733ef7b03] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.008978806s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-661794
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.43s)

                                                
                                    
TestAddons/parallel/Yakd (10.83s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-2vsmk" [8047aabf-7c86-4fad-a1b0-55227553cfb0] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.006643555s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-661794 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-661794 addons disable yakd --alsologtostderr -v=1: (5.826921439s)
--- PASS: TestAddons/parallel/Yakd (10.83s)

                                                
                                    
TestAddons/StoppedEnableDisable (8.55s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-661794
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-661794: (8.280179319s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-661794
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-661794
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-661794
--- PASS: TestAddons/StoppedEnableDisable (8.55s)

                                                
                                    
TestCertOptions (68.63s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-841518 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E0829 19:06:31.539753   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/skaffold-231822/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-841518 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m7.064362245s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-841518 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-841518 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-841518 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-841518" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-841518
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-841518: (1.117706654s)
--- PASS: TestCertOptions (68.63s)
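The `openssl x509 -text -noout` step above verifies that the extra `--apiserver-ips`/`--apiserver-names` values landed in the apiserver certificate's SANs. A hedged, cluster-free reproduction of that check: issue a throwaway self-signed cert carrying the same SANs, then inspect it the same way (assumes OpenSSL 1.1.1+ for `-addext`; all paths are temp files, not minikube's real `/var/lib/minikube/certs`):

```shell
# Create a throwaway cert with the SANs the test passes on the CLI.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/apiserver.key" -out "$tmp/apiserver.crt" \
  -subj "/CN=minikube" \
  -addext "subjectAltName=DNS:localhost,DNS:www.google.com,IP:127.0.0.1,IP:192.168.15.15" \
  2>/dev/null

# Same dump the test greps over ssh; the SAN block should list every name/IP.
openssl x509 -text -noout -in "$tmp/apiserver.crt" | grep -A1 "Subject Alternative Name"
```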

                                                
                                    
TestCertExpiration (347.3s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-889676 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-889676 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m43.202333146s)
E0829 19:02:54.055355   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-889676 --memory=2048 --cert-expiration=8760h --driver=kvm2 
E0829 19:05:50.560927   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/skaffold-231822/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:05:50.567324   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/skaffold-231822/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:05:50.578739   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/skaffold-231822/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:05:50.600413   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/skaffold-231822/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:05:50.642031   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/skaffold-231822/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:05:50.724354   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/skaffold-231822/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:05:50.886087   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/skaffold-231822/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:05:51.207713   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/skaffold-231822/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:05:51.850028   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/skaffold-231822/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:05:53.131670   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/skaffold-231822/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:05:55.694016   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/skaffold-231822/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:06:00.816184   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/skaffold-231822/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:06:11.057581   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/skaffold-231822/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-889676 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (1m2.862971511s)
helpers_test.go:175: Cleaning up "cert-expiration-889676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-889676
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-889676: (1.230407573s)
--- PASS: TestCertExpiration (347.30s)

                                                
                                    
TestDockerFlags (103.9s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-411904 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-411904 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m42.354970521s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-411904 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-411904 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-411904" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-411904
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-411904: (1.098354632s)
--- PASS: TestDockerFlags (103.90s)
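The `systemctl show docker --property=Environment` step above is checking that every `--docker-env` pair reached the docker systemd unit. A sketch of that assertion with a sample `Environment=` line standing in for live systemd output (the real test reads it over ssh from the minikube VM):

```shell
# Sample of what `systemctl show docker --property=Environment` prints.
env_line='Environment=FOO=BAR BAZ=BAT'

# Each --docker-env pair must appear as a whole word in the unit's environment.
for want in FOO=BAR BAZ=BAT; do
  case " ${env_line#Environment=} " in
    *" $want "*) echo "found $want" ;;
    *) echo "missing $want in docker unit" >&2; exit 1 ;;
  esac
done
```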

                                                
                                    
TestForceSystemdFlag (82.08s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-110418 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-110418 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m21.027241243s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-110418 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-110418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-110418
--- PASS: TestForceSystemdFlag (82.08s)

                                                
                                    
TestForceSystemdEnv (114.01s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-940359 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-940359 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m52.69782361s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-940359 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-940359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-940359
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-940359: (1.039937108s)
--- PASS: TestForceSystemdEnv (114.01s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.93s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.93s)

                                                
                                    
TestErrorSpam/setup (50.75s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-712075 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-712075 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-712075 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-712075 --driver=kvm2 : (50.751017339s)
--- PASS: TestErrorSpam/setup (50.75s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712075 --log_dir /tmp/nospam-712075 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712075 --log_dir /tmp/nospam-712075 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712075 --log_dir /tmp/nospam-712075 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.72s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712075 --log_dir /tmp/nospam-712075 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712075 --log_dir /tmp/nospam-712075 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712075 --log_dir /tmp/nospam-712075 status
--- PASS: TestErrorSpam/status (0.72s)

TestErrorSpam/pause (1.23s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712075 --log_dir /tmp/nospam-712075 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712075 --log_dir /tmp/nospam-712075 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712075 --log_dir /tmp/nospam-712075 pause
--- PASS: TestErrorSpam/pause (1.23s)

TestErrorSpam/unpause (1.4s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712075 --log_dir /tmp/nospam-712075 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712075 --log_dir /tmp/nospam-712075 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712075 --log_dir /tmp/nospam-712075 unpause
--- PASS: TestErrorSpam/unpause (1.40s)

TestErrorSpam/stop (6.93s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712075 --log_dir /tmp/nospam-712075 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-712075 --log_dir /tmp/nospam-712075 stop: (3.526734895s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712075 --log_dir /tmp/nospam-712075 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-712075 --log_dir /tmp/nospam-712075 stop: (1.835962035s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-712075 --log_dir /tmp/nospam-712075 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-712075 --log_dir /tmp/nospam-712075 stop: (1.567137887s)
--- PASS: TestErrorSpam/stop (6.93s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19531-13071/.minikube/files/etc/test/nested/copy/20250/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (57.97s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558069 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-558069 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (57.969457575s)
--- PASS: TestFunctional/serial/StartWithProxy (57.97s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.72s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558069 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-558069 --alsologtostderr -v=8: (39.723291709s)
functional_test.go:663: soft start took 39.723992198s for "functional-558069" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.72s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-558069 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.22s)

TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-558069 /tmp/TestFunctionalserialCacheCmdcacheadd_local1002886524/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 cache add minikube-local-cache-test:functional-558069
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 cache delete minikube-local-cache-test:functional-558069
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-558069
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558069 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (214.689303ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.12s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 kubectl -- --context functional-558069 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-558069 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (41.53s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558069 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-558069 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.527033498s)
functional_test.go:761: restart took 41.527145602s for "functional-558069" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.53s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-558069 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-558069 logs: (1.001991035s)
--- PASS: TestFunctional/serial/LogsCmd (1.00s)

TestFunctional/serial/LogsFileCmd (1.02s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 logs --file /tmp/TestFunctionalserialLogsFileCmd1704910621/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-558069 logs --file /tmp/TestFunctionalserialLogsFileCmd1704910621/001/logs.txt: (1.015135885s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.02s)

TestFunctional/serial/InvalidService (4.82s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-558069 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-558069
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-558069: exit status 115 (262.809594ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.97:30394 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-558069 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-558069 delete -f testdata/invalidsvc.yaml: (1.363369345s)
--- PASS: TestFunctional/serial/InvalidService (4.82s)

TestFunctional/parallel/ConfigCmd (0.34s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558069 config get cpus: exit status 14 (53.830805ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558069 config get cpus: exit status 14 (51.25067ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)

TestFunctional/parallel/DashboardCmd (24.98s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-558069 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-558069 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 29962: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (24.98s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558069 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-558069 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (138.799907ms)
-- stdout --
	* [functional-558069] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-13071/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13071/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0829 18:23:02.652367   29845 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:23:02.652508   29845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:23:02.652521   29845 out.go:358] Setting ErrFile to fd 2...
	I0829 18:23:02.652527   29845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:23:02.652837   29845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13071/.minikube/bin
	I0829 18:23:02.653536   29845 out.go:352] Setting JSON to false
	I0829 18:23:02.654690   29845 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3931,"bootTime":1724951852,"procs":255,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:23:02.654748   29845 start.go:139] virtualization: kvm guest
	I0829 18:23:02.656695   29845 out.go:177] * [functional-558069] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:23:02.657967   29845 notify.go:220] Checking for updates...
	I0829 18:23:02.657995   29845 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:23:02.659614   29845 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:23:02.660987   29845 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13071/kubeconfig
	I0829 18:23:02.662246   29845 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13071/.minikube
	I0829 18:23:02.663462   29845 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:23:02.664698   29845 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:23:02.666064   29845 config.go:182] Loaded profile config "functional-558069": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:23:02.666508   29845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:23:02.666566   29845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:23:02.682022   29845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39533
	I0829 18:23:02.682528   29845 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:23:02.683046   29845 main.go:141] libmachine: Using API Version  1
	I0829 18:23:02.683066   29845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:23:02.683385   29845 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:23:02.683555   29845 main.go:141] libmachine: (functional-558069) Calling .DriverName
	I0829 18:23:02.683787   29845 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:23:02.684106   29845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:23:02.684149   29845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:23:02.699292   29845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33269
	I0829 18:23:02.699797   29845 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:23:02.700284   29845 main.go:141] libmachine: Using API Version  1
	I0829 18:23:02.700312   29845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:23:02.700713   29845 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:23:02.700875   29845 main.go:141] libmachine: (functional-558069) Calling .DriverName
	I0829 18:23:02.735377   29845 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 18:23:02.736575   29845 start.go:297] selected driver: kvm2
	I0829 18:23:02.736587   29845 start.go:901] validating driver "kvm2" against &{Name:functional-558069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-558069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:23:02.736689   29845 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:23:02.738632   29845 out.go:201] 
	W0829 18:23:02.739797   29845 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0829 18:23:02.741014   29845 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558069 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558069 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-558069 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (143.871693ms)
-- stdout --
	* [functional-558069] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-13071/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13071/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0829 18:23:02.511611   29818 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:23:02.511738   29818 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:23:02.511749   29818 out.go:358] Setting ErrFile to fd 2...
	I0829 18:23:02.511755   29818 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:23:02.512158   29818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13071/.minikube/bin
	I0829 18:23:02.512734   29818 out.go:352] Setting JSON to false
	I0829 18:23:02.513838   29818 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3930,"bootTime":1724951852,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:23:02.513918   29818 start.go:139] virtualization: kvm guest
	I0829 18:23:02.516418   29818 out.go:177] * [functional-558069] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0829 18:23:02.517897   29818 notify.go:220] Checking for updates...
	I0829 18:23:02.517905   29818 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:23:02.519122   29818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:23:02.520179   29818 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13071/kubeconfig
	I0829 18:23:02.521512   29818 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13071/.minikube
	I0829 18:23:02.522995   29818 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:23:02.524337   29818 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:23:02.526044   29818 config.go:182] Loaded profile config "functional-558069": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:23:02.526656   29818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:23:02.526714   29818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:23:02.542439   29818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42665
	I0829 18:23:02.543198   29818 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:23:02.543743   29818 main.go:141] libmachine: Using API Version  1
	I0829 18:23:02.543761   29818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:23:02.544087   29818 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:23:02.544256   29818 main.go:141] libmachine: (functional-558069) Calling .DriverName
	I0829 18:23:02.544521   29818 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:23:02.544942   29818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:23:02.544985   29818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:23:02.560247   29818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44595
	I0829 18:23:02.560679   29818 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:23:02.561187   29818 main.go:141] libmachine: Using API Version  1
	I0829 18:23:02.561217   29818 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:23:02.561545   29818 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:23:02.561775   29818 main.go:141] libmachine: (functional-558069) Calling .DriverName
	I0829 18:23:02.595337   29818 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0829 18:23:02.596748   29818 start.go:297] selected driver: kvm2
	I0829 18:23:02.596768   29818 start.go:901] validating driver "kvm2" against &{Name:functional-558069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-558069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:23:02.596897   29818 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:23:02.598915   29818 out.go:201] 
	W0829 18:23:02.600272   29818 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0829 18:23:02.601632   29818 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.83s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.83s)

TestFunctional/parallel/ServiceCmdConnect (9.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-558069 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-558069 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5j4x2" [e38cc552-11b4-422c-8771-975ef5b97f5e] Pending
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5j4x2" [e38cc552-11b4-422c-8771-975ef5b97f5e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5j4x2" [e38cc552-11b4-422c-8771-975ef5b97f5e] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004292061s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.97:31080
functional_test.go:1675: http://192.168.39.97:31080: success! body:

Hostname: hello-node-connect-67bdd5bbb4-5j4x2

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.97:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.97:31080
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.70s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (40.11s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5446aec6-d37c-4c20-b3f3-73db22726b81] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003728514s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-558069 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-558069 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-558069 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-558069 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f6848597-e89b-4a0b-84b7-f4479335a344] Pending
helpers_test.go:344: "sp-pod" [f6848597-e89b-4a0b-84b7-f4479335a344] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f6848597-e89b-4a0b-84b7-f4479335a344] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.003839043s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-558069 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-558069 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-558069 delete -f testdata/storage-provisioner/pod.yaml: (1.294133099s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-558069 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [696d7b94-9269-43bc-9631-4b86072613db] Pending
helpers_test.go:344: "sp-pod" [696d7b94-9269-43bc-9631-4b86072613db] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [696d7b94-9269-43bc-9631-4b86072613db] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0043715s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-558069 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.11s)

TestFunctional/parallel/SSHCmd (0.37s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.37s)

TestFunctional/parallel/CpCmd (1.34s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh -n functional-558069 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 cp functional-558069:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1495966376/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh -n functional-558069 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh -n functional-558069 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.34s)

TestFunctional/parallel/MySQL (31.4s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-558069 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-t4hs5" [c511e203-0a86-4fa1-b879-bde14299d919] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-t4hs5" [c511e203-0a86-4fa1-b879-bde14299d919] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.005113716s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-558069 exec mysql-6cdb49bbb-t4hs5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-558069 exec mysql-6cdb49bbb-t4hs5 -- mysql -ppassword -e "show databases;": exit status 1 (262.781462ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-558069 exec mysql-6cdb49bbb-t4hs5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-558069 exec mysql-6cdb49bbb-t4hs5 -- mysql -ppassword -e "show databases;": exit status 1 (496.309714ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-558069 exec mysql-6cdb49bbb-t4hs5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-558069 exec mysql-6cdb49bbb-t4hs5 -- mysql -ppassword -e "show databases;": exit status 1 (167.51688ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-558069 exec mysql-6cdb49bbb-t4hs5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.40s)

TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/20250/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh "sudo cat /etc/test/nested/copy/20250/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

TestFunctional/parallel/CertSync (1.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/20250.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh "sudo cat /etc/ssl/certs/20250.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/20250.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh "sudo cat /usr/share/ca-certificates/20250.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/202502.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh "sudo cat /etc/ssl/certs/202502.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/202502.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh "sudo cat /usr/share/ca-certificates/202502.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.29s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-558069 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558069 ssh "sudo systemctl is-active crio": exit status 1 (239.24428ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-558069 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-558069 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-nkksz" [f71674c8-7df5-4b5c-902c-37fbfb23fe29] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-nkksz" [f71674c8-7df5-4b5c-902c-37fbfb23fe29] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.018559805s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.24s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.67s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 version -o=json --components
2024/08/29 18:23:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/Version/components (0.67s)

TestFunctional/parallel/DockerEnv/bash (0.84s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-558069 docker-env) && out/minikube-linux-amd64 status -p functional-558069"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-558069 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.84s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-558069 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-558069
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-558069
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-558069 image ls --format short --alsologtostderr:
I0829 18:23:27.879569   30953 out.go:345] Setting OutFile to fd 1 ...
I0829 18:23:27.879683   30953 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:23:27.879693   30953 out.go:358] Setting ErrFile to fd 2...
I0829 18:23:27.879697   30953 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:23:27.879891   30953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13071/.minikube/bin
I0829 18:23:27.880431   30953 config.go:182] Loaded profile config "functional-558069": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:23:27.880520   30953 config.go:182] Loaded profile config "functional-558069": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:23:27.880892   30953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0829 18:23:27.880935   30953 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 18:23:27.898131   30953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43433
I0829 18:23:27.898662   30953 main.go:141] libmachine: () Calling .GetVersion
I0829 18:23:27.899231   30953 main.go:141] libmachine: Using API Version  1
I0829 18:23:27.899252   30953 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 18:23:27.899570   30953 main.go:141] libmachine: () Calling .GetMachineName
I0829 18:23:27.899754   30953 main.go:141] libmachine: (functional-558069) Calling .GetState
I0829 18:23:27.901691   30953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0829 18:23:27.901737   30953 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 18:23:27.916448   30953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46071
I0829 18:23:27.916896   30953 main.go:141] libmachine: () Calling .GetVersion
I0829 18:23:27.917361   30953 main.go:141] libmachine: Using API Version  1
I0829 18:23:27.917376   30953 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 18:23:27.917684   30953 main.go:141] libmachine: () Calling .GetMachineName
I0829 18:23:27.917881   30953 main.go:141] libmachine: (functional-558069) Calling .DriverName
I0829 18:23:27.918082   30953 ssh_runner.go:195] Run: systemctl --version
I0829 18:23:27.918107   30953 main.go:141] libmachine: (functional-558069) Calling .GetSSHHostname
I0829 18:23:27.920986   30953 main.go:141] libmachine: (functional-558069) DBG | domain functional-558069 has defined MAC address 52:54:00:dd:e7:ab in network mk-functional-558069
I0829 18:23:27.921415   30953 main.go:141] libmachine: (functional-558069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e7:ab", ip: ""} in network mk-functional-558069: {Iface:virbr1 ExpiryTime:2024-08-29 19:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e7:ab Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-558069 Clientid:01:52:54:00:dd:e7:ab}
I0829 18:23:27.921445   30953 main.go:141] libmachine: (functional-558069) DBG | domain functional-558069 has defined IP address 192.168.39.97 and MAC address 52:54:00:dd:e7:ab in network mk-functional-558069
I0829 18:23:27.921650   30953 main.go:141] libmachine: (functional-558069) Calling .GetSSHPort
I0829 18:23:27.921796   30953 main.go:141] libmachine: (functional-558069) Calling .GetSSHKeyPath
I0829 18:23:27.921905   30953 main.go:141] libmachine: (functional-558069) Calling .GetSSHUsername
I0829 18:23:27.922029   30953 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/functional-558069/id_rsa Username:docker}
I0829 18:23:28.012315   30953 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0829 18:23:28.056050   30953 main.go:141] libmachine: Making call to close driver server
I0829 18:23:28.056068   30953 main.go:141] libmachine: (functional-558069) Calling .Close
I0829 18:23:28.056331   30953 main.go:141] libmachine: Successfully made call to close driver server
I0829 18:23:28.056357   30953 main.go:141] libmachine: Making call to close connection to plugin binary
I0829 18:23:28.056360   30953 main.go:141] libmachine: (functional-558069) DBG | Closing plugin on server side
I0829 18:23:28.056370   30953 main.go:141] libmachine: Making call to close driver server
I0829 18:23:28.056379   30953 main.go:141] libmachine: (functional-558069) Calling .Close
I0829 18:23:28.056642   30953 main.go:141] libmachine: Successfully made call to close driver server
I0829 18:23:28.056688   30953 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
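The short-format output above is one image reference per line. A minimal sketch of splitting such references into repository and tag, for tooling that post-processes this log (`parse_image_ref` is a hypothetical helper, not minikube code; note the tag colon must come after the final `/` so registry ports are not mistaken for tags):

```python
# Hypothetical helper: split a reference like "registry.k8s.io/pause:3.10"
# into (repository, tag). Not part of minikube; a sketch for log tooling.
def parse_image_ref(ref: str) -> tuple[str, str]:
    slash = ref.rfind("/")
    colon = ref.rfind(":")
    # A colon only marks a tag when it appears after the last "/";
    # otherwise it belongs to a registry port (e.g. "localhost:5000/img").
    if colon > slash:
        return ref[:colon], ref[colon + 1:]
    return ref, "latest"

# Sample references taken from the `image ls --format short` output above.
refs = [
    "registry.k8s.io/pause:3.10",
    "registry.k8s.io/coredns/coredns:v1.11.1",
    "docker.io/kicbase/echo-server:functional-558069",
]
for repo, tag in (parse_image_ref(r) for r in refs):
    print(repo, tag)
```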

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-558069 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.31.0           | 045733566833c | 88.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.0           | 604f5db92eaa8 | 94.2MB |
| registry.k8s.io/kube-proxy                  | v1.31.0           | ad83b2ca7b09e | 91.5MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kicbase/echo-server               | functional-558069 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-558069 | 124a9ba0ab4ee | 30B    |
| registry.k8s.io/kube-scheduler              | v1.31.0           | 1766f54c897f0 | 67.4MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-558069 image ls --format table --alsologtostderr:
I0829 18:23:28.364086   31065 out.go:345] Setting OutFile to fd 1 ...
I0829 18:23:28.364192   31065 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:23:28.364204   31065 out.go:358] Setting ErrFile to fd 2...
I0829 18:23:28.364210   31065 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:23:28.364432   31065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13071/.minikube/bin
I0829 18:23:28.365023   31065 config.go:182] Loaded profile config "functional-558069": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:23:28.365133   31065 config.go:182] Loaded profile config "functional-558069": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:23:28.365546   31065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0829 18:23:28.365589   31065 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 18:23:28.380513   31065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44519
I0829 18:23:28.380993   31065 main.go:141] libmachine: () Calling .GetVersion
I0829 18:23:28.381626   31065 main.go:141] libmachine: Using API Version  1
I0829 18:23:28.381651   31065 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 18:23:28.381928   31065 main.go:141] libmachine: () Calling .GetMachineName
I0829 18:23:28.382114   31065 main.go:141] libmachine: (functional-558069) Calling .GetState
I0829 18:23:28.384081   31065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0829 18:23:28.384139   31065 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 18:23:28.398993   31065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44669
I0829 18:23:28.399399   31065 main.go:141] libmachine: () Calling .GetVersion
I0829 18:23:28.399851   31065 main.go:141] libmachine: Using API Version  1
I0829 18:23:28.399880   31065 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 18:23:28.400265   31065 main.go:141] libmachine: () Calling .GetMachineName
I0829 18:23:28.400436   31065 main.go:141] libmachine: (functional-558069) Calling .DriverName
I0829 18:23:28.400624   31065 ssh_runner.go:195] Run: systemctl --version
I0829 18:23:28.400654   31065 main.go:141] libmachine: (functional-558069) Calling .GetSSHHostname
I0829 18:23:28.403908   31065 main.go:141] libmachine: (functional-558069) DBG | domain functional-558069 has defined MAC address 52:54:00:dd:e7:ab in network mk-functional-558069
I0829 18:23:28.404274   31065 main.go:141] libmachine: (functional-558069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e7:ab", ip: ""} in network mk-functional-558069: {Iface:virbr1 ExpiryTime:2024-08-29 19:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e7:ab Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-558069 Clientid:01:52:54:00:dd:e7:ab}
I0829 18:23:28.404310   31065 main.go:141] libmachine: (functional-558069) DBG | domain functional-558069 has defined IP address 192.168.39.97 and MAC address 52:54:00:dd:e7:ab in network mk-functional-558069
I0829 18:23:28.404457   31065 main.go:141] libmachine: (functional-558069) Calling .GetSSHPort
I0829 18:23:28.404641   31065 main.go:141] libmachine: (functional-558069) Calling .GetSSHKeyPath
I0829 18:23:28.404799   31065 main.go:141] libmachine: (functional-558069) Calling .GetSSHUsername
I0829 18:23:28.404929   31065 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/functional-558069/id_rsa Username:docker}
I0829 18:23:28.515438   31065 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0829 18:23:28.564676   31065 main.go:141] libmachine: Making call to close driver server
I0829 18:23:28.564695   31065 main.go:141] libmachine: (functional-558069) Calling .Close
I0829 18:23:28.565021   31065 main.go:141] libmachine: Successfully made call to close driver server
I0829 18:23:28.565045   31065 main.go:141] libmachine: Making call to close connection to plugin binary
I0829 18:23:28.565057   31065 main.go:141] libmachine: Making call to close driver server
I0829 18:23:28.565070   31065 main.go:141] libmachine: (functional-558069) Calling .Close
I0829 18:23:28.565096   31065 main.go:141] libmachine: (functional-558069) DBG | Closing plugin on server side
I0829 18:23:28.565286   31065 main.go:141] libmachine: Successfully made call to close driver server
I0829 18:23:28.565298   31065 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-558069 image ls --format json --alsologtostderr:
[{"id":"124a9ba0ab4ee4c91ccf9fd14beee22cd76ed0fd617accaa5ae573a286570664","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-558069"],"size":"30"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"94200000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"88400000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-558069"],"size":"4940000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67400000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"91500000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-558069 image ls --format json --alsologtostderr:
I0829 18:23:28.133802   31010 out.go:345] Setting OutFile to fd 1 ...
I0829 18:23:28.133905   31010 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:23:28.133915   31010 out.go:358] Setting ErrFile to fd 2...
I0829 18:23:28.133919   31010 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:23:28.134139   31010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13071/.minikube/bin
I0829 18:23:28.134700   31010 config.go:182] Loaded profile config "functional-558069": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:23:28.134789   31010 config.go:182] Loaded profile config "functional-558069": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:23:28.135143   31010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0829 18:23:28.135181   31010 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 18:23:28.150469   31010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38977
I0829 18:23:28.150879   31010 main.go:141] libmachine: () Calling .GetVersion
I0829 18:23:28.151524   31010 main.go:141] libmachine: Using API Version  1
I0829 18:23:28.151539   31010 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 18:23:28.151960   31010 main.go:141] libmachine: () Calling .GetMachineName
I0829 18:23:28.152176   31010 main.go:141] libmachine: (functional-558069) Calling .GetState
I0829 18:23:28.154222   31010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0829 18:23:28.154268   31010 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 18:23:28.169742   31010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37533
I0829 18:23:28.170273   31010 main.go:141] libmachine: () Calling .GetVersion
I0829 18:23:28.170837   31010 main.go:141] libmachine: Using API Version  1
I0829 18:23:28.170862   31010 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 18:23:28.171242   31010 main.go:141] libmachine: () Calling .GetMachineName
I0829 18:23:28.171488   31010 main.go:141] libmachine: (functional-558069) Calling .DriverName
I0829 18:23:28.171764   31010 ssh_runner.go:195] Run: systemctl --version
I0829 18:23:28.171790   31010 main.go:141] libmachine: (functional-558069) Calling .GetSSHHostname
I0829 18:23:28.174576   31010 main.go:141] libmachine: (functional-558069) DBG | domain functional-558069 has defined MAC address 52:54:00:dd:e7:ab in network mk-functional-558069
I0829 18:23:28.174998   31010 main.go:141] libmachine: (functional-558069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e7:ab", ip: ""} in network mk-functional-558069: {Iface:virbr1 ExpiryTime:2024-08-29 19:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e7:ab Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-558069 Clientid:01:52:54:00:dd:e7:ab}
I0829 18:23:28.175034   31010 main.go:141] libmachine: (functional-558069) DBG | domain functional-558069 has defined IP address 192.168.39.97 and MAC address 52:54:00:dd:e7:ab in network mk-functional-558069
I0829 18:23:28.175181   31010 main.go:141] libmachine: (functional-558069) Calling .GetSSHPort
I0829 18:23:28.175358   31010 main.go:141] libmachine: (functional-558069) Calling .GetSSHKeyPath
I0829 18:23:28.175514   31010 main.go:141] libmachine: (functional-558069) Calling .GetSSHUsername
I0829 18:23:28.175639   31010 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/functional-558069/id_rsa Username:docker}
I0829 18:23:28.267293   31010 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0829 18:23:28.311117   31010 main.go:141] libmachine: Making call to close driver server
I0829 18:23:28.311136   31010 main.go:141] libmachine: (functional-558069) Calling .Close
I0829 18:23:28.311426   31010 main.go:141] libmachine: Successfully made call to close driver server
I0829 18:23:28.311464   31010 main.go:141] libmachine: Making call to close connection to plugin binary
I0829 18:23:28.311481   31010 main.go:141] libmachine: Making call to close driver server
I0829 18:23:28.311501   31010 main.go:141] libmachine: (functional-558069) Calling .Close
I0829 18:23:28.311810   31010 main.go:141] libmachine: Successfully made call to close driver server
I0829 18:23:28.311827   31010 main.go:141] libmachine: Making call to close connection to plugin binary
I0829 18:23:28.311826   31010 main.go:141] libmachine: (functional-558069) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
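The JSON stdout above is a plain array of objects with `id`, `repoDigests`, `repoTags`, and a `size` given as a decimal string of bytes. A minimal sketch of consuming it with the standard library (`raw` stands in for the captured stdout; the two entries are abbreviated copies from the output above):

```python
import json

# `raw` stands in for the stdout of `minikube image ls --format json`.
raw = '''[
  {"id": "2e96e5913fc06", "repoDigests": [],
   "repoTags": ["registry.k8s.io/etcd:3.5.15-0"], "size": "148000000"},
  {"id": "873ed75102791", "repoDigests": [],
   "repoTags": ["registry.k8s.io/pause:3.10"], "size": "736000"}
]'''

images = json.loads(raw)
# "size" is a string, so convert to int before sorting largest-first.
for img in sorted(images, key=lambda i: int(i["size"]), reverse=True):
    print(img["repoTags"][0], int(img["size"]))
```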

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-558069 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "94200000"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67400000"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "88400000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-558069
size: "4940000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "91500000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 124a9ba0ab4ee4c91ccf9fd14beee22cd76ed0fd617accaa5ae573a286570664
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-558069
size: "30"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-558069 image ls --format yaml --alsologtostderr:
I0829 18:23:27.903251   30962 out.go:345] Setting OutFile to fd 1 ...
I0829 18:23:27.903376   30962 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:23:27.903403   30962 out.go:358] Setting ErrFile to fd 2...
I0829 18:23:27.903411   30962 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:23:27.903612   30962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13071/.minikube/bin
I0829 18:23:27.904136   30962 config.go:182] Loaded profile config "functional-558069": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:23:27.904227   30962 config.go:182] Loaded profile config "functional-558069": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:23:27.904594   30962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0829 18:23:27.904646   30962 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 18:23:27.919723   30962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35689
I0829 18:23:27.920227   30962 main.go:141] libmachine: () Calling .GetVersion
I0829 18:23:27.920830   30962 main.go:141] libmachine: Using API Version  1
I0829 18:23:27.920849   30962 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 18:23:27.921250   30962 main.go:141] libmachine: () Calling .GetMachineName
I0829 18:23:27.921489   30962 main.go:141] libmachine: (functional-558069) Calling .GetState
I0829 18:23:27.923315   30962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0829 18:23:27.923357   30962 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 18:23:27.938028   30962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37139
I0829 18:23:27.938486   30962 main.go:141] libmachine: () Calling .GetVersion
I0829 18:23:27.938938   30962 main.go:141] libmachine: Using API Version  1
I0829 18:23:27.938957   30962 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 18:23:27.939247   30962 main.go:141] libmachine: () Calling .GetMachineName
I0829 18:23:27.939425   30962 main.go:141] libmachine: (functional-558069) Calling .DriverName
I0829 18:23:27.939597   30962 ssh_runner.go:195] Run: systemctl --version
I0829 18:23:27.939624   30962 main.go:141] libmachine: (functional-558069) Calling .GetSSHHostname
I0829 18:23:27.942311   30962 main.go:141] libmachine: (functional-558069) DBG | domain functional-558069 has defined MAC address 52:54:00:dd:e7:ab in network mk-functional-558069
I0829 18:23:27.942746   30962 main.go:141] libmachine: (functional-558069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e7:ab", ip: ""} in network mk-functional-558069: {Iface:virbr1 ExpiryTime:2024-08-29 19:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e7:ab Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-558069 Clientid:01:52:54:00:dd:e7:ab}
I0829 18:23:27.942793   30962 main.go:141] libmachine: (functional-558069) DBG | domain functional-558069 has defined IP address 192.168.39.97 and MAC address 52:54:00:dd:e7:ab in network mk-functional-558069
I0829 18:23:27.942833   30962 main.go:141] libmachine: (functional-558069) Calling .GetSSHPort
I0829 18:23:27.942995   30962 main.go:141] libmachine: (functional-558069) Calling .GetSSHKeyPath
I0829 18:23:27.943138   30962 main.go:141] libmachine: (functional-558069) Calling .GetSSHUsername
I0829 18:23:27.943269   30962 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/functional-558069/id_rsa Username:docker}
I0829 18:23:28.035331   30962 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0829 18:23:28.077805   30962 main.go:141] libmachine: Making call to close driver server
I0829 18:23:28.077822   30962 main.go:141] libmachine: (functional-558069) Calling .Close
I0829 18:23:28.078070   30962 main.go:141] libmachine: Successfully made call to close driver server
I0829 18:23:28.078098   30962 main.go:141] libmachine: Making call to close connection to plugin binary
I0829 18:23:28.078097   30962 main.go:141] libmachine: (functional-558069) DBG | Closing plugin on server side
I0829 18:23:28.078107   30962 main.go:141] libmachine: Making call to close driver server
I0829 18:23:28.078115   30962 main.go:141] libmachine: (functional-558069) Calling .Close
I0829 18:23:28.078323   30962 main.go:141] libmachine: Successfully made call to close driver server
I0829 18:23:28.078335   30962 main.go:141] libmachine: Making call to close connection to plugin binary
I0829 18:23:28.078384   30962 main.go:141] libmachine: (functional-558069) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558069 ssh pgrep buildkitd: exit status 1 (203.313102ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 image build -t localhost/my-image:functional-558069 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-558069 image build -t localhost/my-image:functional-558069 testdata/build --alsologtostderr: (3.24776114s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-558069 image build -t localhost/my-image:functional-558069 testdata/build --alsologtostderr:
I0829 18:23:28.308987   31052 out.go:345] Setting OutFile to fd 1 ...
I0829 18:23:28.309285   31052 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:23:28.309295   31052 out.go:358] Setting ErrFile to fd 2...
I0829 18:23:28.309300   31052 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:23:28.309503   31052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13071/.minikube/bin
I0829 18:23:28.310087   31052 config.go:182] Loaded profile config "functional-558069": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:23:28.310688   31052 config.go:182] Loaded profile config "functional-558069": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:23:28.311221   31052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0829 18:23:28.311257   31052 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 18:23:28.327451   31052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44851
I0829 18:23:28.328008   31052 main.go:141] libmachine: () Calling .GetVersion
I0829 18:23:28.328612   31052 main.go:141] libmachine: Using API Version  1
I0829 18:23:28.328638   31052 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 18:23:28.329028   31052 main.go:141] libmachine: () Calling .GetMachineName
I0829 18:23:28.329261   31052 main.go:141] libmachine: (functional-558069) Calling .GetState
I0829 18:23:28.331495   31052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0829 18:23:28.331540   31052 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 18:23:28.348035   31052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41327
I0829 18:23:28.348541   31052 main.go:141] libmachine: () Calling .GetVersion
I0829 18:23:28.349218   31052 main.go:141] libmachine: Using API Version  1
I0829 18:23:28.349240   31052 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 18:23:28.349635   31052 main.go:141] libmachine: () Calling .GetMachineName
I0829 18:23:28.349816   31052 main.go:141] libmachine: (functional-558069) Calling .DriverName
I0829 18:23:28.350049   31052 ssh_runner.go:195] Run: systemctl --version
I0829 18:23:28.350083   31052 main.go:141] libmachine: (functional-558069) Calling .GetSSHHostname
I0829 18:23:28.353038   31052 main.go:141] libmachine: (functional-558069) DBG | domain functional-558069 has defined MAC address 52:54:00:dd:e7:ab in network mk-functional-558069
I0829 18:23:28.354322   31052 main.go:141] libmachine: (functional-558069) Calling .GetSSHPort
I0829 18:23:28.354357   31052 main.go:141] libmachine: (functional-558069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:e7:ab", ip: ""} in network mk-functional-558069: {Iface:virbr1 ExpiryTime:2024-08-29 19:20:36 +0000 UTC Type:0 Mac:52:54:00:dd:e7:ab Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-558069 Clientid:01:52:54:00:dd:e7:ab}
I0829 18:23:28.354372   31052 main.go:141] libmachine: (functional-558069) DBG | domain functional-558069 has defined IP address 192.168.39.97 and MAC address 52:54:00:dd:e7:ab in network mk-functional-558069
I0829 18:23:28.354535   31052 main.go:141] libmachine: (functional-558069) Calling .GetSSHKeyPath
I0829 18:23:28.354680   31052 main.go:141] libmachine: (functional-558069) Calling .GetSSHUsername
I0829 18:23:28.354827   31052 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/functional-558069/id_rsa Username:docker}
I0829 18:23:28.448748   31052 build_images.go:161] Building image from path: /tmp/build.190847791.tar
I0829 18:23:28.448814   31052 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0829 18:23:28.466493   31052 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.190847791.tar
I0829 18:23:28.472953   31052 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.190847791.tar: stat -c "%s %y" /var/lib/minikube/build/build.190847791.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.190847791.tar': No such file or directory
I0829 18:23:28.472989   31052 ssh_runner.go:362] scp /tmp/build.190847791.tar --> /var/lib/minikube/build/build.190847791.tar (3072 bytes)
I0829 18:23:28.522319   31052 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.190847791
I0829 18:23:28.550333   31052 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.190847791 -xf /var/lib/minikube/build/build.190847791.tar
I0829 18:23:28.568742   31052 docker.go:360] Building image: /var/lib/minikube/build/build.190847791
I0829 18:23:28.568809   31052 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-558069 /var/lib/minikube/build/build.190847791
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:42141a36dae7f7893250421709eaad684e4a970dc014000661e5cc41e6c13c5f
#8 writing image sha256:42141a36dae7f7893250421709eaad684e4a970dc014000661e5cc41e6c13c5f done
#8 naming to localhost/my-image:functional-558069 done
#8 DONE 0.1s
I0829 18:23:31.464461   31052 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-558069 /var/lib/minikube/build/build.190847791: (2.895621392s)
I0829 18:23:31.464553   31052 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.190847791
I0829 18:23:31.484709   31052 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.190847791.tar
I0829 18:23:31.508954   31052 build_images.go:217] Built localhost/my-image:functional-558069 from /tmp/build.190847791.tar
I0829 18:23:31.508999   31052 build_images.go:133] succeeded building to: functional-558069
I0829 18:23:31.509005   31052 build_images.go:134] failed building to: 
I0829 18:23:31.509042   31052 main.go:141] libmachine: Making call to close driver server
I0829 18:23:31.509081   31052 main.go:141] libmachine: (functional-558069) Calling .Close
I0829 18:23:31.509375   31052 main.go:141] libmachine: Successfully made call to close driver server
I0829 18:23:31.509393   31052 main.go:141] libmachine: (functional-558069) DBG | Closing plugin on server side
I0829 18:23:31.509395   31052 main.go:141] libmachine: Making call to close connection to plugin binary
I0829 18:23:31.509430   31052 main.go:141] libmachine: Making call to close driver server
I0829 18:23:31.509438   31052 main.go:141] libmachine: (functional-558069) Calling .Close
I0829 18:23:31.509676   31052 main.go:141] libmachine: Successfully made call to close driver server
I0829 18:23:31.509689   31052 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.67s)

TestFunctional/parallel/ImageCommands/Setup (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.64624982s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-558069
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.72s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 image load --daemon kicbase/echo-server:functional-558069 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-558069 image load --daemon kicbase/echo-server:functional-558069 --alsologtostderr: (1.028306776s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.24s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "313.651375ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "51.327975ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "230.981374ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "50.576684ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

TestFunctional/parallel/MountCmd/any-port (22.39s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-558069 /tmp/TestFunctionalparallelMountCmdany-port416456431/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724955777191339046" to /tmp/TestFunctionalparallelMountCmdany-port416456431/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724955777191339046" to /tmp/TestFunctionalparallelMountCmdany-port416456431/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724955777191339046" to /tmp/TestFunctionalparallelMountCmdany-port416456431/001/test-1724955777191339046
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558069 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (231.399434ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 29 18:22 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 29 18:22 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 29 18:22 test-1724955777191339046
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh cat /mount-9p/test-1724955777191339046
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-558069 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a748478a-9178-4964-8ea1-60ba2f8eac18] Pending
helpers_test.go:344: "busybox-mount" [a748478a-9178-4964-8ea1-60ba2f8eac18] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a748478a-9178-4964-8ea1-60ba2f8eac18] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a748478a-9178-4964-8ea1-60ba2f8eac18] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 20.00533508s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-558069 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558069 /tmp/TestFunctionalparallelMountCmdany-port416456431/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (22.39s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 image load --daemon kicbase/echo-server:functional-558069 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.75s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-558069
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 image load --daemon kicbase/echo-server:functional-558069 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.50s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 image save kicbase/echo-server:functional-558069 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 image rm kicbase/echo-server:functional-558069 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.41s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-558069
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 image save --daemon kicbase/echo-server:functional-558069 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-558069
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

TestFunctional/parallel/ServiceCmd/List (0.84s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.84s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.83s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 service list -o json
functional_test.go:1494: Took "825.858736ms" to run "out/minikube-linux-amd64 -p functional-558069 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.83s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.97:31050
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

TestFunctional/parallel/ServiceCmd/Format (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

TestFunctional/parallel/ServiceCmd/URL (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.97:31050
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.27s)

TestFunctional/parallel/MountCmd/specific-port (1.44s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-558069 /tmp/TestFunctionalparallelMountCmdspecific-port3951133447/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558069 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (183.020769ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558069 /tmp/TestFunctionalparallelMountCmdspecific-port3951133447/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558069 ssh "sudo umount -f /mount-9p": exit status 1 (196.713664ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-558069 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558069 /tmp/TestFunctionalparallelMountCmdspecific-port3951133447/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.44s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-558069 /tmp/TestFunctionalparallelMountCmdVerifyCleanup947947301/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-558069 /tmp/TestFunctionalparallelMountCmdVerifyCleanup947947301/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-558069 /tmp/TestFunctionalparallelMountCmdVerifyCleanup947947301/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558069 ssh "findmnt -T" /mount1: exit status 1 (270.322031ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-558069 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-558069 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558069 /tmp/TestFunctionalparallelMountCmdVerifyCleanup947947301/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558069 /tmp/TestFunctionalparallelMountCmdVerifyCleanup947947301/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558069 /tmp/TestFunctionalparallelMountCmdVerifyCleanup947947301/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-558069
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-558069
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-558069
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (267.75s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-693858 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-693858 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (2m3.72380095s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-693858 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-693858 cache add gcr.io/k8s-minikube/gvisor-addon:2: (22.842566037s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-693858 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-693858 addons enable gvisor: (4.173690395s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [576c4d21-a97a-4291-a5d5-abeb2965ee35] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004023642s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-693858 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [c7f46df0-4990-46c8-809f-59e1f9893ed9] Pending
helpers_test.go:344: "nginx-gvisor" [c7f46df0-4990-46c8-809f-59e1f9893ed9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [c7f46df0-4990-46c8-809f-59e1f9893ed9] Running
E0829 19:04:11.839858   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 30.003864139s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-693858
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-693858: (6.828540392s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-693858 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-693858 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m2.034854071s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [576c4d21-a97a-4291-a5d5-abeb2965ee35] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:344: "gvisor" [576c4d21-a97a-4291-a5d5-abeb2965ee35] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.003935039s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [c7f46df0-4990-46c8-809f-59e1f9893ed9] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.004726527s
helpers_test.go:175: Cleaning up "gvisor-693858" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-693858
--- PASS: TestGvisorAddon (267.75s)

TestMultiControlPlane/serial/StartCluster (219.66s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-355145 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0829 18:24:11.840105   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:11.847021   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:11.858409   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:11.879864   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:11.921324   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:12.002808   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:12.165040   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:12.487205   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:13.128882   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:14.411122   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:16.973458   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:22.094929   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:32.336994   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:52.819014   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:25:33.780761   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:26:55.702298   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-355145 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m39.021523973s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (219.66s)

TestMultiControlPlane/serial/DeployApp (5.08s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-355145 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-355145 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-355145 -- rollout status deployment/busybox: (2.778795095s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-355145 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-355145 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-355145 -- exec busybox-7dff88458-db4jv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-355145 -- exec busybox-7dff88458-fp8jz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-355145 -- exec busybox-7dff88458-jqt4r -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-355145 -- exec busybox-7dff88458-db4jv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-355145 -- exec busybox-7dff88458-fp8jz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-355145 -- exec busybox-7dff88458-jqt4r -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-355145 -- exec busybox-7dff88458-db4jv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-355145 -- exec busybox-7dff88458-fp8jz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-355145 -- exec busybox-7dff88458-jqt4r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.08s)

TestMultiControlPlane/serial/PingHostFromPods (1.25s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-355145 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-355145 -- exec busybox-7dff88458-db4jv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-355145 -- exec busybox-7dff88458-db4jv -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-355145 -- exec busybox-7dff88458-fp8jz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-355145 -- exec busybox-7dff88458-fp8jz -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-355145 -- exec busybox-7dff88458-jqt4r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-355145 -- exec busybox-7dff88458-jqt4r -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.25s)
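The host-IP lookup above hinges on a small pipeline run inside the busybox pod: `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`. A minimal local sketch of that field extraction, using a hypothetical busybox-style nslookup transcript (the addresses and layout are illustrative, not captured from this run):

```shell
# Hypothetical busybox-style nslookup output; only line 5 matters here.
# awk 'NR==5' keeps the fifth line, and cut takes its third space-delimited
# field, which is where this output format places the resolved address.
transcript='Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'

printf '%s\n' "$transcript" | awk 'NR==5' | cut -d' ' -f3
```

The extracted address is then handed to `ping -c 1`, which is what the subsequent `ha_test.go:218` steps assert on.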

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (62.42s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-355145 -v=7 --alsologtostderr
E0829 18:27:54.055141   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:27:54.061584   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:27:54.073041   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:27:54.094542   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:27:54.135973   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:27:54.217440   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:27:54.379030   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:27:54.700650   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:27:55.342659   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:27:56.624278   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:27:59.186248   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:28:04.308586   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:28:14.550618   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:28:35.032894   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-355145 -v=7 --alsologtostderr: (1m1.598220403s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (62.42s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-355145 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

TestMultiControlPlane/serial/CopyFile (12.36s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 cp testdata/cp-test.txt ha-355145:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 cp ha-355145:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3474442025/001/cp-test_ha-355145.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 cp ha-355145:/home/docker/cp-test.txt ha-355145-m02:/home/docker/cp-test_ha-355145_ha-355145-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m02 "sudo cat /home/docker/cp-test_ha-355145_ha-355145-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 cp ha-355145:/home/docker/cp-test.txt ha-355145-m03:/home/docker/cp-test_ha-355145_ha-355145-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m03 "sudo cat /home/docker/cp-test_ha-355145_ha-355145-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 cp ha-355145:/home/docker/cp-test.txt ha-355145-m04:/home/docker/cp-test_ha-355145_ha-355145-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m04 "sudo cat /home/docker/cp-test_ha-355145_ha-355145-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 cp testdata/cp-test.txt ha-355145-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 cp ha-355145-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3474442025/001/cp-test_ha-355145-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 cp ha-355145-m02:/home/docker/cp-test.txt ha-355145:/home/docker/cp-test_ha-355145-m02_ha-355145.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145 "sudo cat /home/docker/cp-test_ha-355145-m02_ha-355145.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 cp ha-355145-m02:/home/docker/cp-test.txt ha-355145-m03:/home/docker/cp-test_ha-355145-m02_ha-355145-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m03 "sudo cat /home/docker/cp-test_ha-355145-m02_ha-355145-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 cp ha-355145-m02:/home/docker/cp-test.txt ha-355145-m04:/home/docker/cp-test_ha-355145-m02_ha-355145-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m04 "sudo cat /home/docker/cp-test_ha-355145-m02_ha-355145-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 cp testdata/cp-test.txt ha-355145-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 cp ha-355145-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3474442025/001/cp-test_ha-355145-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 cp ha-355145-m03:/home/docker/cp-test.txt ha-355145:/home/docker/cp-test_ha-355145-m03_ha-355145.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145 "sudo cat /home/docker/cp-test_ha-355145-m03_ha-355145.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 cp ha-355145-m03:/home/docker/cp-test.txt ha-355145-m02:/home/docker/cp-test_ha-355145-m03_ha-355145-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m02 "sudo cat /home/docker/cp-test_ha-355145-m03_ha-355145-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 cp ha-355145-m03:/home/docker/cp-test.txt ha-355145-m04:/home/docker/cp-test_ha-355145-m03_ha-355145-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m04 "sudo cat /home/docker/cp-test_ha-355145-m03_ha-355145-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 cp testdata/cp-test.txt ha-355145-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 cp ha-355145-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3474442025/001/cp-test_ha-355145-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 cp ha-355145-m04:/home/docker/cp-test.txt ha-355145:/home/docker/cp-test_ha-355145-m04_ha-355145.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145 "sudo cat /home/docker/cp-test_ha-355145-m04_ha-355145.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 cp ha-355145-m04:/home/docker/cp-test.txt ha-355145-m02:/home/docker/cp-test_ha-355145-m04_ha-355145-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m02 "sudo cat /home/docker/cp-test_ha-355145-m04_ha-355145-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 cp ha-355145-m04:/home/docker/cp-test.txt ha-355145-m03:/home/docker/cp-test_ha-355145-m04_ha-355145-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 ssh -n ha-355145-m03 "sudo cat /home/docker/cp-test_ha-355145-m04_ha-355145-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.36s)
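Each `cp` in the block above is validated the same way: copy a file to, from, or between nodes, then `ssh -n <node> "sudo cat ..."` the destination back and compare it with the source. A local sketch of that round-trip check, with no cluster involved (the file name and contents are illustrative):

```shell
# Simulate the copy-and-verify pattern locally: the test's `minikube cp`
# followed by `minikube ssh "sudo cat ..."` reduces to a copy plus a
# byte-for-byte comparison of source and destination.
src="$(mktemp)"
dst="$(mktemp)"
printf 'Test file for minikube cp\n' > "$src"
cp "$src" "$dst"                # stands in for `minikube -p <profile> cp`
if cmp -s "$src" "$dst"; then   # stands in for the cat-and-compare step
  echo "contents match"
fi
rm -f "$src" "$dst"
```

The test repeats this for every source/destination pair (host, primary, each secondary control plane, and the worker), which is why the section runs 16 copy/verify rounds.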

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.24s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-355145 node stop m02 -v=7 --alsologtostderr: (12.61991344s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-355145 status -v=7 --alsologtostderr: exit status 7 (614.362468ms)

-- stdout --
	ha-355145
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-355145-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-355145-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-355145-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0829 18:29:03.469461   35603 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:29:03.469723   35603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:29:03.469733   35603 out.go:358] Setting ErrFile to fd 2...
	I0829 18:29:03.469738   35603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:29:03.469969   35603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13071/.minikube/bin
	I0829 18:29:03.470217   35603 out.go:352] Setting JSON to false
	I0829 18:29:03.470252   35603 mustload.go:65] Loading cluster: ha-355145
	I0829 18:29:03.470393   35603 notify.go:220] Checking for updates...
	I0829 18:29:03.470777   35603 config.go:182] Loaded profile config "ha-355145": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:29:03.470797   35603 status.go:255] checking status of ha-355145 ...
	I0829 18:29:03.471309   35603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:29:03.471370   35603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:29:03.492572   35603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42111
	I0829 18:29:03.493236   35603 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:29:03.493930   35603 main.go:141] libmachine: Using API Version  1
	I0829 18:29:03.493955   35603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:29:03.494382   35603 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:29:03.494623   35603 main.go:141] libmachine: (ha-355145) Calling .GetState
	I0829 18:29:03.496876   35603 status.go:330] ha-355145 host status = "Running" (err=<nil>)
	I0829 18:29:03.496896   35603 host.go:66] Checking if "ha-355145" exists ...
	I0829 18:29:03.497232   35603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:29:03.497274   35603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:29:03.514069   35603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45925
	I0829 18:29:03.514558   35603 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:29:03.515204   35603 main.go:141] libmachine: Using API Version  1
	I0829 18:29:03.515238   35603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:29:03.515713   35603 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:29:03.515921   35603 main.go:141] libmachine: (ha-355145) Calling .GetIP
	I0829 18:29:03.519352   35603 main.go:141] libmachine: (ha-355145) DBG | domain ha-355145 has defined MAC address 52:54:00:3b:b8:50 in network mk-ha-355145
	I0829 18:29:03.519868   35603 main.go:141] libmachine: (ha-355145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:50", ip: ""} in network mk-ha-355145: {Iface:virbr1 ExpiryTime:2024-08-29 19:24:03 +0000 UTC Type:0 Mac:52:54:00:3b:b8:50 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:ha-355145 Clientid:01:52:54:00:3b:b8:50}
	I0829 18:29:03.519910   35603 main.go:141] libmachine: (ha-355145) DBG | domain ha-355145 has defined IP address 192.168.39.199 and MAC address 52:54:00:3b:b8:50 in network mk-ha-355145
	I0829 18:29:03.520046   35603 host.go:66] Checking if "ha-355145" exists ...
	I0829 18:29:03.520348   35603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:29:03.520396   35603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:29:03.535856   35603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44467
	I0829 18:29:03.536407   35603 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:29:03.536991   35603 main.go:141] libmachine: Using API Version  1
	I0829 18:29:03.537015   35603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:29:03.537354   35603 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:29:03.537537   35603 main.go:141] libmachine: (ha-355145) Calling .DriverName
	I0829 18:29:03.537743   35603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:29:03.537766   35603 main.go:141] libmachine: (ha-355145) Calling .GetSSHHostname
	I0829 18:29:03.540690   35603 main.go:141] libmachine: (ha-355145) DBG | domain ha-355145 has defined MAC address 52:54:00:3b:b8:50 in network mk-ha-355145
	I0829 18:29:03.541073   35603 main.go:141] libmachine: (ha-355145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:50", ip: ""} in network mk-ha-355145: {Iface:virbr1 ExpiryTime:2024-08-29 19:24:03 +0000 UTC Type:0 Mac:52:54:00:3b:b8:50 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:ha-355145 Clientid:01:52:54:00:3b:b8:50}
	I0829 18:29:03.541094   35603 main.go:141] libmachine: (ha-355145) DBG | domain ha-355145 has defined IP address 192.168.39.199 and MAC address 52:54:00:3b:b8:50 in network mk-ha-355145
	I0829 18:29:03.541226   35603 main.go:141] libmachine: (ha-355145) Calling .GetSSHPort
	I0829 18:29:03.541419   35603 main.go:141] libmachine: (ha-355145) Calling .GetSSHKeyPath
	I0829 18:29:03.541571   35603 main.go:141] libmachine: (ha-355145) Calling .GetSSHUsername
	I0829 18:29:03.541716   35603 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/ha-355145/id_rsa Username:docker}
	I0829 18:29:03.617917   35603 ssh_runner.go:195] Run: systemctl --version
	I0829 18:29:03.625044   35603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:29:03.641528   35603 kubeconfig.go:125] found "ha-355145" server: "https://192.168.39.254:8443"
	I0829 18:29:03.641562   35603 api_server.go:166] Checking apiserver status ...
	I0829 18:29:03.641593   35603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:29:03.655906   35603 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1943/cgroup
	W0829 18:29:03.665518   35603 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1943/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:29:03.665574   35603 ssh_runner.go:195] Run: ls
	I0829 18:29:03.670291   35603 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:29:03.674587   35603 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:29:03.674609   35603 status.go:422] ha-355145 apiserver status = Running (err=<nil>)
	I0829 18:29:03.674618   35603 status.go:257] ha-355145 status: &{Name:ha-355145 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:29:03.674633   35603 status.go:255] checking status of ha-355145-m02 ...
	I0829 18:29:03.674911   35603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:29:03.674941   35603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:29:03.689466   35603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46037
	I0829 18:29:03.690018   35603 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:29:03.690520   35603 main.go:141] libmachine: Using API Version  1
	I0829 18:29:03.690546   35603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:29:03.690845   35603 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:29:03.691034   35603 main.go:141] libmachine: (ha-355145-m02) Calling .GetState
	I0829 18:29:03.692686   35603 status.go:330] ha-355145-m02 host status = "Stopped" (err=<nil>)
	I0829 18:29:03.692702   35603 status.go:343] host is not running, skipping remaining checks
	I0829 18:29:03.692710   35603 status.go:257] ha-355145-m02 status: &{Name:ha-355145-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:29:03.692735   35603 status.go:255] checking status of ha-355145-m03 ...
	I0829 18:29:03.693020   35603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:29:03.693052   35603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:29:03.707532   35603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44285
	I0829 18:29:03.707901   35603 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:29:03.708383   35603 main.go:141] libmachine: Using API Version  1
	I0829 18:29:03.708402   35603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:29:03.708705   35603 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:29:03.708921   35603 main.go:141] libmachine: (ha-355145-m03) Calling .GetState
	I0829 18:29:03.710507   35603 status.go:330] ha-355145-m03 host status = "Running" (err=<nil>)
	I0829 18:29:03.710521   35603 host.go:66] Checking if "ha-355145-m03" exists ...
	I0829 18:29:03.710831   35603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:29:03.710874   35603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:29:03.725160   35603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34549
	I0829 18:29:03.725587   35603 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:29:03.726066   35603 main.go:141] libmachine: Using API Version  1
	I0829 18:29:03.726086   35603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:29:03.726408   35603 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:29:03.726592   35603 main.go:141] libmachine: (ha-355145-m03) Calling .GetIP
	I0829 18:29:03.729272   35603 main.go:141] libmachine: (ha-355145-m03) DBG | domain ha-355145-m03 has defined MAC address 52:54:00:49:da:ef in network mk-ha-355145
	I0829 18:29:03.729685   35603 main.go:141] libmachine: (ha-355145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:da:ef", ip: ""} in network mk-ha-355145: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:16 +0000 UTC Type:0 Mac:52:54:00:49:da:ef Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-355145-m03 Clientid:01:52:54:00:49:da:ef}
	I0829 18:29:03.729717   35603 main.go:141] libmachine: (ha-355145-m03) DBG | domain ha-355145-m03 has defined IP address 192.168.39.146 and MAC address 52:54:00:49:da:ef in network mk-ha-355145
	I0829 18:29:03.729827   35603 host.go:66] Checking if "ha-355145-m03" exists ...
	I0829 18:29:03.730192   35603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:29:03.730226   35603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:29:03.745966   35603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42737
	I0829 18:29:03.746407   35603 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:29:03.746888   35603 main.go:141] libmachine: Using API Version  1
	I0829 18:29:03.746909   35603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:29:03.747208   35603 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:29:03.747403   35603 main.go:141] libmachine: (ha-355145-m03) Calling .DriverName
	I0829 18:29:03.747563   35603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:29:03.747599   35603 main.go:141] libmachine: (ha-355145-m03) Calling .GetSSHHostname
	I0829 18:29:03.750271   35603 main.go:141] libmachine: (ha-355145-m03) DBG | domain ha-355145-m03 has defined MAC address 52:54:00:49:da:ef in network mk-ha-355145
	I0829 18:29:03.750645   35603 main.go:141] libmachine: (ha-355145-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:da:ef", ip: ""} in network mk-ha-355145: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:16 +0000 UTC Type:0 Mac:52:54:00:49:da:ef Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-355145-m03 Clientid:01:52:54:00:49:da:ef}
	I0829 18:29:03.750664   35603 main.go:141] libmachine: (ha-355145-m03) DBG | domain ha-355145-m03 has defined IP address 192.168.39.146 and MAC address 52:54:00:49:da:ef in network mk-ha-355145
	I0829 18:29:03.750831   35603 main.go:141] libmachine: (ha-355145-m03) Calling .GetSSHPort
	I0829 18:29:03.751017   35603 main.go:141] libmachine: (ha-355145-m03) Calling .GetSSHKeyPath
	I0829 18:29:03.751145   35603 main.go:141] libmachine: (ha-355145-m03) Calling .GetSSHUsername
	I0829 18:29:03.751265   35603 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/ha-355145-m03/id_rsa Username:docker}
	I0829 18:29:03.833364   35603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:29:03.849220   35603 kubeconfig.go:125] found "ha-355145" server: "https://192.168.39.254:8443"
	I0829 18:29:03.849245   35603 api_server.go:166] Checking apiserver status ...
	I0829 18:29:03.849276   35603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:29:03.866036   35603 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1828/cgroup
	W0829 18:29:03.878775   35603 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1828/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:29:03.878822   35603 ssh_runner.go:195] Run: ls
	I0829 18:29:03.883565   35603 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:29:03.887719   35603 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:29:03.887744   35603 status.go:422] ha-355145-m03 apiserver status = Running (err=<nil>)
	I0829 18:29:03.887761   35603 status.go:257] ha-355145-m03 status: &{Name:ha-355145-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:29:03.887777   35603 status.go:255] checking status of ha-355145-m04 ...
	I0829 18:29:03.888118   35603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:29:03.888151   35603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:29:03.902986   35603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46297
	I0829 18:29:03.903448   35603 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:29:03.903937   35603 main.go:141] libmachine: Using API Version  1
	I0829 18:29:03.903955   35603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:29:03.904242   35603 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:29:03.904428   35603 main.go:141] libmachine: (ha-355145-m04) Calling .GetState
	I0829 18:29:03.906012   35603 status.go:330] ha-355145-m04 host status = "Running" (err=<nil>)
	I0829 18:29:03.906030   35603 host.go:66] Checking if "ha-355145-m04" exists ...
	I0829 18:29:03.906301   35603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:29:03.906333   35603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:29:03.921712   35603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I0829 18:29:03.922218   35603 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:29:03.922767   35603 main.go:141] libmachine: Using API Version  1
	I0829 18:29:03.922796   35603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:29:03.923118   35603 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:29:03.923320   35603 main.go:141] libmachine: (ha-355145-m04) Calling .GetIP
	I0829 18:29:03.926423   35603 main.go:141] libmachine: (ha-355145-m04) DBG | domain ha-355145-m04 has defined MAC address 52:54:00:7c:e9:66 in network mk-ha-355145
	I0829 18:29:03.926818   35603 main.go:141] libmachine: (ha-355145-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:e9:66", ip: ""} in network mk-ha-355145: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:50 +0000 UTC Type:0 Mac:52:54:00:7c:e9:66 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-355145-m04 Clientid:01:52:54:00:7c:e9:66}
	I0829 18:29:03.926852   35603 main.go:141] libmachine: (ha-355145-m04) DBG | domain ha-355145-m04 has defined IP address 192.168.39.249 and MAC address 52:54:00:7c:e9:66 in network mk-ha-355145
	I0829 18:29:03.927019   35603 host.go:66] Checking if "ha-355145-m04" exists ...
	I0829 18:29:03.927324   35603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:29:03.927355   35603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:29:03.941743   35603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42049
	I0829 18:29:03.942152   35603 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:29:03.942559   35603 main.go:141] libmachine: Using API Version  1
	I0829 18:29:03.942581   35603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:29:03.942848   35603 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:29:03.943020   35603 main.go:141] libmachine: (ha-355145-m04) Calling .DriverName
	I0829 18:29:03.943179   35603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:29:03.943194   35603 main.go:141] libmachine: (ha-355145-m04) Calling .GetSSHHostname
	I0829 18:29:03.945770   35603 main.go:141] libmachine: (ha-355145-m04) DBG | domain ha-355145-m04 has defined MAC address 52:54:00:7c:e9:66 in network mk-ha-355145
	I0829 18:29:03.946222   35603 main.go:141] libmachine: (ha-355145-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:e9:66", ip: ""} in network mk-ha-355145: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:50 +0000 UTC Type:0 Mac:52:54:00:7c:e9:66 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-355145-m04 Clientid:01:52:54:00:7c:e9:66}
	I0829 18:29:03.946248   35603 main.go:141] libmachine: (ha-355145-m04) DBG | domain ha-355145-m04 has defined IP address 192.168.39.249 and MAC address 52:54:00:7c:e9:66 in network mk-ha-355145
	I0829 18:29:03.946416   35603 main.go:141] libmachine: (ha-355145-m04) Calling .GetSSHPort
	I0829 18:29:03.946583   35603 main.go:141] libmachine: (ha-355145-m04) Calling .GetSSHKeyPath
	I0829 18:29:03.946712   35603 main.go:141] libmachine: (ha-355145-m04) Calling .GetSSHUsername
	I0829 18:29:03.946846   35603 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/ha-355145-m04/id_rsa Username:docker}
	I0829 18:29:04.024704   35603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:29:04.038800   35603 status.go:257] ha-355145-m04 status: &{Name:ha-355145-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.24s)
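The status probes in the trace above repeatedly run `sh -c "df -h /var | awk 'NR==2{print $5}'"` over SSH on each node. A rough local sketch of what that pipeline extracts (probing `/tmp` instead of the guest's `/var`, and using `df -P` so a long device name cannot wrap the data row — both assumptions, not minikube's exact invocation):

```shell
#!/bin/sh
# Sketch of the disk-usage probe seen in the trace: df prints a header row,
# then one data row for the queried mount; awk 'NR==2' selects that data row,
# and $5 is the Use% column (Filesystem Size Used Avail Use% Mounted on).
usage=$(df -P /tmp | awk 'NR==2{print $5}')

# A well-formed probe result is a percentage such as "42%".
case "$usage" in
  *%) echo "disk usage of /tmp: $usage" ;;
  *)  echo "unexpected df output: $usage" >&2; exit 1 ;;
esac
```

In the log, the same probe runs once per running node, immediately after the SSH client to that node is established.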

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.39s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.39s)

TestMultiControlPlane/serial/RestartSecondaryNode (44.95s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 node start m02 -v=7 --alsologtostderr
E0829 18:29:11.839691   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:15.994587   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:39.544611   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-355145 node start m02 -v=7 --alsologtostderr: (44.0892595s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (44.95s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.51s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.51s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (247.76s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-355145 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-355145 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-355145 -v=7 --alsologtostderr: (40.68867303s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-355145 --wait=true -v=7 --alsologtostderr
E0829 18:30:37.916414   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:32:54.055140   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:33:21.758745   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-355145 --wait=true -v=7 --alsologtostderr: (3m26.98325986s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-355145
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (247.76s)

TestMultiControlPlane/serial/DeleteSecondaryNode (7.16s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-355145 node delete m03 -v=7 --alsologtostderr: (6.42166558s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.16s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

TestMultiControlPlane/serial/StopCluster (38.23s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 stop -v=7 --alsologtostderr
E0829 18:34:11.839760   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-355145 stop -v=7 --alsologtostderr: (38.130852227s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-355145 status -v=7 --alsologtostderr: exit status 7 (103.475894ms)

-- stdout --
	ha-355145
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-355145-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-355145-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0829 18:34:43.346788   38013 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:34:43.347052   38013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:34:43.347062   38013 out.go:358] Setting ErrFile to fd 2...
	I0829 18:34:43.347069   38013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:34:43.347330   38013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13071/.minikube/bin
	I0829 18:34:43.347553   38013 out.go:352] Setting JSON to false
	I0829 18:34:43.347581   38013 mustload.go:65] Loading cluster: ha-355145
	I0829 18:34:43.347688   38013 notify.go:220] Checking for updates...
	I0829 18:34:43.347993   38013 config.go:182] Loaded profile config "ha-355145": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:34:43.348010   38013 status.go:255] checking status of ha-355145 ...
	I0829 18:34:43.348372   38013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:34:43.348421   38013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:34:43.368942   38013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33823
	I0829 18:34:43.369407   38013 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:34:43.370001   38013 main.go:141] libmachine: Using API Version  1
	I0829 18:34:43.370034   38013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:34:43.370369   38013 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:34:43.370566   38013 main.go:141] libmachine: (ha-355145) Calling .GetState
	I0829 18:34:43.372236   38013 status.go:330] ha-355145 host status = "Stopped" (err=<nil>)
	I0829 18:34:43.372251   38013 status.go:343] host is not running, skipping remaining checks
	I0829 18:34:43.372259   38013 status.go:257] ha-355145 status: &{Name:ha-355145 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:34:43.372294   38013 status.go:255] checking status of ha-355145-m02 ...
	I0829 18:34:43.372567   38013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:34:43.372606   38013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:34:43.387155   38013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33665
	I0829 18:34:43.387548   38013 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:34:43.388048   38013 main.go:141] libmachine: Using API Version  1
	I0829 18:34:43.388063   38013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:34:43.388388   38013 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:34:43.388593   38013 main.go:141] libmachine: (ha-355145-m02) Calling .GetState
	I0829 18:34:43.390266   38013 status.go:330] ha-355145-m02 host status = "Stopped" (err=<nil>)
	I0829 18:34:43.390284   38013 status.go:343] host is not running, skipping remaining checks
	I0829 18:34:43.390290   38013 status.go:257] ha-355145-m02 status: &{Name:ha-355145-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:34:43.390313   38013 status.go:255] checking status of ha-355145-m04 ...
	I0829 18:34:43.390587   38013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:34:43.390641   38013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:34:43.405171   38013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44477
	I0829 18:34:43.405619   38013 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:34:43.406173   38013 main.go:141] libmachine: Using API Version  1
	I0829 18:34:43.406190   38013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:34:43.406503   38013 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:34:43.406679   38013 main.go:141] libmachine: (ha-355145-m04) Calling .GetState
	I0829 18:34:43.408269   38013 status.go:330] ha-355145-m04 host status = "Stopped" (err=<nil>)
	I0829 18:34:43.408286   38013 status.go:343] host is not running, skipping remaining checks
	I0829 18:34:43.408295   38013 status.go:257] ha-355145-m04 status: &{Name:ha-355145-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (38.23s)

TestMultiControlPlane/serial/RestartCluster (124.79s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-355145 --wait=true -v=7 --alsologtostderr --driver=kvm2 
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-355145 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m4.069467478s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (124.79s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

TestMultiControlPlane/serial/AddSecondaryNode (79.52s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-355145 --control-plane -v=7 --alsologtostderr
E0829 18:37:54.054471   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-355145 --control-plane -v=7 --alsologtostderr: (1m18.684137232s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-355145 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.52s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

TestImageBuild/serial/Setup (46.22s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-954050 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-954050 --driver=kvm2 : (46.221998555s)
--- PASS: TestImageBuild/serial/Setup (46.22s)

TestImageBuild/serial/NormalBuild (2.11s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-954050
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-954050: (2.114420378s)
--- PASS: TestImageBuild/serial/NormalBuild (2.11s)

TestImageBuild/serial/BuildWithBuildArg (1.13s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-954050
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-954050: (1.126154159s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.13s)
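The `--build-opt=build-arg=ENV_A=test_env_str` flag above is forwarded to the image build as an ordinary Docker build argument. A minimal, hypothetical Dockerfile that would consume it (the actual `testdata/image-build/test-arg` fixture may differ) looks like:

```dockerfile
# ARG receives the value supplied via --build-opt=build-arg=ENV_A=...
FROM busybox
ARG ENV_A
# Bake the build-time value into the image so it is inspectable at run time.
ENV ENV_A=${ENV_A}
RUN echo "built with ENV_A=${ENV_A}"
```

The accompanying `--build-opt=no-cache` maps to Docker's `--no-cache`, forcing every layer to be rebuilt rather than reused from the build cache.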

TestImageBuild/serial/BuildWithDockerIgnore (1.03s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-954050
image_test.go:133: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-954050: (1.02774934s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.03s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.91s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-954050
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.91s)

TestJSONOutput/start/Command (59.69s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-846031 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0829 18:39:11.839416   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-846031 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (59.688984642s)
--- PASS: TestJSONOutput/start/Command (59.69s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.58s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-846031 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.58s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.54s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-846031 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.54s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.61s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-846031 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-846031 --output=json --user=testUser: (12.60770809s)
--- PASS: TestJSONOutput/stop/Command (12.61s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-254614 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-254614 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.828335ms)
-- stdout --
	{"specversion":"1.0","id":"405b95de-fcdd-4331-9662-b9b75410dbfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-254614] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b799a59b-8beb-4a4d-9aae-8d513f22a515","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19531"}}
	{"specversion":"1.0","id":"95c953e0-cd6e-40aa-8b5b-282ec67ef861","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9f0359c5-21c9-44aa-9499-d5ae3291bfd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19531-13071/kubeconfig"}}
	{"specversion":"1.0","id":"41619872-a1d9-46ee-a37a-b9d5d0d874bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13071/.minikube"}}
	{"specversion":"1.0","id":"6e6df921-be1c-4c21-9dc6-95608fc682b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4c45a20a-ef4c-47d2-9505-adf1c6e45eee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0f0d5e1f-33f5-40e0-b108-6b7dd817e9ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-254614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-254614
--- PASS: TestErrorJSONOutput (0.19s)
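TestErrorJSONOutput verifies that an unsupported driver yields a structured, CloudEvents-style error event rather than free-form text. As a minimal sketch of how such an event line can be consumed, the example below extracts `data.message` from one event; the event string is abbreviated from the log output above, and `python3` is assumed to be available for JSON parsing:

```shell
# Pull the human-readable message out of one minikube CloudEvents JSON line.
# Sketch only: the event below is abbreviated from the log output above.
event='{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver '\''fail'\'' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}'
msg=$(printf '%s' "$event" | python3 -c 'import json,sys; print(json.load(sys.stdin)["data"]["message"])')
echo "$msg"
```

In a real consumer the same parse would run over every line of `minikube start --output=json`, dispatching on the `type` field.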

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (98.79s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-180097 --driver=kvm2 
E0829 18:40:34.906923   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-180097 --driver=kvm2 : (48.259066979s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-182619 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-182619 --driver=kvm2 : (47.716389193s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-180097
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-182619
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-182619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-182619
helpers_test.go:175: Cleaning up "first-180097" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-180097
--- PASS: TestMinikubeProfile (98.79s)

TestMountStart/serial/StartWithMountFirst (27.87s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-953765 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-953765 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (26.874219891s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.87s)

TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-953765 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-953765 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
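Each VerifyMount* step above reduces to running `mount | grep 9p` over SSH and treating a match as proof the 9p host mount is present. A minimal offline sketch of that check follows; the sample `mount` line is hypothetical, modeled on a typical minikube 9p host mount, since a real check would pipe the live `mount` output instead:

```shell
# Decide whether a 9p filesystem shows up in `mount` output.
# The sample line is hypothetical; a real check would pipe `mount` itself.
mount_output='192.168.39.1 on /minikube-host type 9p (rw,relatime,sync,dirsync)'
if printf '%s\n' "$mount_output" | grep -q 'type 9p'; then
  result="9p mount found"
else
  result="no 9p mount"
fi
echo "$result"
```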

TestMountStart/serial/StartWithMountSecond (28.23s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-971865 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-971865 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.224834923s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.23s)

TestMountStart/serial/VerifyMountSecond (0.36s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971865 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971865 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

TestMountStart/serial/DeleteFirst (0.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-953765 --alsologtostderr -v=5
E0829 18:42:54.054482   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971865 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971865 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

TestMountStart/serial/Stop (2.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-971865
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-971865: (2.273544906s)
--- PASS: TestMountStart/serial/Stop (2.27s)

TestMountStart/serial/RestartStopped (24.13s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-971865
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-971865: (23.131044728s)
--- PASS: TestMountStart/serial/RestartStopped (24.13s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971865 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971865 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (127.45s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-087362 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0829 18:44:11.839618   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:44:17.120942   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-087362 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m7.059636393s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (127.45s)

TestMultiNode/serial/DeployApp2Nodes (4.72s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-087362 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-087362 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-087362 -- rollout status deployment/busybox: (3.186125138s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-087362 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-087362 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-087362 -- exec busybox-7dff88458-6p9h5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-087362 -- exec busybox-7dff88458-c9zh8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-087362 -- exec busybox-7dff88458-6p9h5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-087362 -- exec busybox-7dff88458-c9zh8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-087362 -- exec busybox-7dff88458-6p9h5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-087362 -- exec busybox-7dff88458-c9zh8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.72s)

TestMultiNode/serial/PingHostFrom2Pods (0.8s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-087362 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-087362 -- exec busybox-7dff88458-6p9h5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-087362 -- exec busybox-7dff88458-6p9h5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-087362 -- exec busybox-7dff88458-c9zh8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-087362 -- exec busybox-7dff88458-c9zh8 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)
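PingHostFrom2Pods obtains the host IP inside each pod with `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`: line 5 of the busybox-style nslookup output, third space-delimited field. The sketch below replays that pipeline against canned output; the sample text is hypothetical but shaped like busybox nslookup, and field 3 only exists because `cut` counts the empty field between the two consecutive spaces:

```shell
# Replay the test's extraction pipeline on canned nslookup output.
# Sample output is hypothetical; field 3 works because of the double space.
nslookup_out='Server:  10.96.0.10
Address:  10.96.0.10:53

Name:  host.minikube.internal
Address:  192.168.39.1'
host_ip=$(printf '%s\n' "$nslookup_out" | awk 'NR==5' | cut -d" " -f3)
echo "$host_ip"
```

The fragility of this pipeline (fixed line number, fixed spacing) is why the test pairs it with a direct `ping -c 1` against the extracted address.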

TestMultiNode/serial/AddNode (57.32s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-087362 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-087362 -v 3 --alsologtostderr: (56.751367699s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.32s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-087362 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (7.03s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 cp testdata/cp-test.txt multinode-087362:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 ssh -n multinode-087362 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 cp multinode-087362:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2241054629/001/cp-test_multinode-087362.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 ssh -n multinode-087362 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 cp multinode-087362:/home/docker/cp-test.txt multinode-087362-m02:/home/docker/cp-test_multinode-087362_multinode-087362-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 ssh -n multinode-087362 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 ssh -n multinode-087362-m02 "sudo cat /home/docker/cp-test_multinode-087362_multinode-087362-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 cp multinode-087362:/home/docker/cp-test.txt multinode-087362-m03:/home/docker/cp-test_multinode-087362_multinode-087362-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 ssh -n multinode-087362 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 ssh -n multinode-087362-m03 "sudo cat /home/docker/cp-test_multinode-087362_multinode-087362-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 cp testdata/cp-test.txt multinode-087362-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 ssh -n multinode-087362-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 cp multinode-087362-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2241054629/001/cp-test_multinode-087362-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 ssh -n multinode-087362-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 cp multinode-087362-m02:/home/docker/cp-test.txt multinode-087362:/home/docker/cp-test_multinode-087362-m02_multinode-087362.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 ssh -n multinode-087362-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 ssh -n multinode-087362 "sudo cat /home/docker/cp-test_multinode-087362-m02_multinode-087362.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 cp multinode-087362-m02:/home/docker/cp-test.txt multinode-087362-m03:/home/docker/cp-test_multinode-087362-m02_multinode-087362-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 ssh -n multinode-087362-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 ssh -n multinode-087362-m03 "sudo cat /home/docker/cp-test_multinode-087362-m02_multinode-087362-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 cp testdata/cp-test.txt multinode-087362-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 ssh -n multinode-087362-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 cp multinode-087362-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2241054629/001/cp-test_multinode-087362-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 ssh -n multinode-087362-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 cp multinode-087362-m03:/home/docker/cp-test.txt multinode-087362:/home/docker/cp-test_multinode-087362-m03_multinode-087362.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 ssh -n multinode-087362-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 ssh -n multinode-087362 "sudo cat /home/docker/cp-test_multinode-087362-m03_multinode-087362.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 cp multinode-087362-m03:/home/docker/cp-test.txt multinode-087362-m02:/home/docker/cp-test_multinode-087362-m03_multinode-087362-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 ssh -n multinode-087362-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 ssh -n multinode-087362-m02 "sudo cat /home/docker/cp-test_multinode-087362-m03_multinode-087362-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.03s)

TestMultiNode/serial/StopNode (3.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-087362 node stop m03: (2.455472652s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-087362 status: exit status 7 (407.375735ms)

-- stdout --
	multinode-087362
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-087362-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-087362-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-087362 status --alsologtostderr: exit status 7 (410.324369ms)

-- stdout --
	multinode-087362
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-087362-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-087362-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0829 18:46:43.033156   46209 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:46:43.033266   46209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:46:43.033274   46209 out.go:358] Setting ErrFile to fd 2...
	I0829 18:46:43.033278   46209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:46:43.033471   46209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13071/.minikube/bin
	I0829 18:46:43.033626   46209 out.go:352] Setting JSON to false
	I0829 18:46:43.033649   46209 mustload.go:65] Loading cluster: multinode-087362
	I0829 18:46:43.033764   46209 notify.go:220] Checking for updates...
	I0829 18:46:43.034032   46209 config.go:182] Loaded profile config "multinode-087362": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:46:43.034047   46209 status.go:255] checking status of multinode-087362 ...
	I0829 18:46:43.034544   46209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:46:43.034612   46209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:46:43.054407   46209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40749
	I0829 18:46:43.054846   46209 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:46:43.055468   46209 main.go:141] libmachine: Using API Version  1
	I0829 18:46:43.055498   46209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:46:43.055856   46209 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:46:43.056100   46209 main.go:141] libmachine: (multinode-087362) Calling .GetState
	I0829 18:46:43.057745   46209 status.go:330] multinode-087362 host status = "Running" (err=<nil>)
	I0829 18:46:43.057761   46209 host.go:66] Checking if "multinode-087362" exists ...
	I0829 18:46:43.058152   46209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:46:43.058197   46209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:46:43.073590   46209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38041
	I0829 18:46:43.074019   46209 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:46:43.074469   46209 main.go:141] libmachine: Using API Version  1
	I0829 18:46:43.074492   46209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:46:43.074822   46209 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:46:43.074980   46209 main.go:141] libmachine: (multinode-087362) Calling .GetIP
	I0829 18:46:43.077587   46209 main.go:141] libmachine: (multinode-087362) DBG | domain multinode-087362 has defined MAC address 52:54:00:87:6b:95 in network mk-multinode-087362
	I0829 18:46:43.077960   46209 main.go:141] libmachine: (multinode-087362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:6b:95", ip: ""} in network mk-multinode-087362: {Iface:virbr1 ExpiryTime:2024-08-29 19:43:36 +0000 UTC Type:0 Mac:52:54:00:87:6b:95 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-087362 Clientid:01:52:54:00:87:6b:95}
	I0829 18:46:43.078016   46209 main.go:141] libmachine: (multinode-087362) DBG | domain multinode-087362 has defined IP address 192.168.39.233 and MAC address 52:54:00:87:6b:95 in network mk-multinode-087362
	I0829 18:46:43.078119   46209 host.go:66] Checking if "multinode-087362" exists ...
	I0829 18:46:43.078416   46209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:46:43.078461   46209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:46:43.093113   46209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0829 18:46:43.093488   46209 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:46:43.093897   46209 main.go:141] libmachine: Using API Version  1
	I0829 18:46:43.093921   46209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:46:43.094245   46209 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:46:43.094424   46209 main.go:141] libmachine: (multinode-087362) Calling .DriverName
	I0829 18:46:43.094600   46209 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:46:43.094627   46209 main.go:141] libmachine: (multinode-087362) Calling .GetSSHHostname
	I0829 18:46:43.097511   46209 main.go:141] libmachine: (multinode-087362) DBG | domain multinode-087362 has defined MAC address 52:54:00:87:6b:95 in network mk-multinode-087362
	I0829 18:46:43.097900   46209 main.go:141] libmachine: (multinode-087362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:6b:95", ip: ""} in network mk-multinode-087362: {Iface:virbr1 ExpiryTime:2024-08-29 19:43:36 +0000 UTC Type:0 Mac:52:54:00:87:6b:95 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-087362 Clientid:01:52:54:00:87:6b:95}
	I0829 18:46:43.097925   46209 main.go:141] libmachine: (multinode-087362) DBG | domain multinode-087362 has defined IP address 192.168.39.233 and MAC address 52:54:00:87:6b:95 in network mk-multinode-087362
	I0829 18:46:43.098111   46209 main.go:141] libmachine: (multinode-087362) Calling .GetSSHPort
	I0829 18:46:43.098305   46209 main.go:141] libmachine: (multinode-087362) Calling .GetSSHKeyPath
	I0829 18:46:43.098472   46209 main.go:141] libmachine: (multinode-087362) Calling .GetSSHUsername
	I0829 18:46:43.098633   46209 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/multinode-087362/id_rsa Username:docker}
	I0829 18:46:43.180930   46209 ssh_runner.go:195] Run: systemctl --version
	I0829 18:46:43.186581   46209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:46:43.200875   46209 kubeconfig.go:125] found "multinode-087362" server: "https://192.168.39.233:8443"
	I0829 18:46:43.200912   46209 api_server.go:166] Checking apiserver status ...
	I0829 18:46:43.200951   46209 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:46:43.214133   46209 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1873/cgroup
	W0829 18:46:43.224097   46209 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1873/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:46:43.224166   46209 ssh_runner.go:195] Run: ls
	I0829 18:46:43.228175   46209 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0829 18:46:43.232031   46209 api_server.go:279] https://192.168.39.233:8443/healthz returned 200:
	ok
	I0829 18:46:43.232051   46209 status.go:422] multinode-087362 apiserver status = Running (err=<nil>)
	I0829 18:46:43.232060   46209 status.go:257] multinode-087362 status: &{Name:multinode-087362 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:46:43.232074   46209 status.go:255] checking status of multinode-087362-m02 ...
	I0829 18:46:43.232362   46209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:46:43.232401   46209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:46:43.247421   46209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35497
	I0829 18:46:43.247880   46209 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:46:43.248364   46209 main.go:141] libmachine: Using API Version  1
	I0829 18:46:43.248393   46209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:46:43.248718   46209 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:46:43.248930   46209 main.go:141] libmachine: (multinode-087362-m02) Calling .GetState
	I0829 18:46:43.250554   46209 status.go:330] multinode-087362-m02 host status = "Running" (err=<nil>)
	I0829 18:46:43.250571   46209 host.go:66] Checking if "multinode-087362-m02" exists ...
	I0829 18:46:43.250875   46209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:46:43.250914   46209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:46:43.265645   46209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33979
	I0829 18:46:43.266133   46209 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:46:43.266655   46209 main.go:141] libmachine: Using API Version  1
	I0829 18:46:43.266677   46209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:46:43.266929   46209 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:46:43.267049   46209 main.go:141] libmachine: (multinode-087362-m02) Calling .GetIP
	I0829 18:46:43.269708   46209 main.go:141] libmachine: (multinode-087362-m02) DBG | domain multinode-087362-m02 has defined MAC address 52:54:00:1f:45:58 in network mk-multinode-087362
	I0829 18:46:43.270196   46209 main.go:141] libmachine: (multinode-087362-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:45:58", ip: ""} in network mk-multinode-087362: {Iface:virbr1 ExpiryTime:2024-08-29 19:44:50 +0000 UTC Type:0 Mac:52:54:00:1f:45:58 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-087362-m02 Clientid:01:52:54:00:1f:45:58}
	I0829 18:46:43.270222   46209 main.go:141] libmachine: (multinode-087362-m02) DBG | domain multinode-087362-m02 has defined IP address 192.168.39.194 and MAC address 52:54:00:1f:45:58 in network mk-multinode-087362
	I0829 18:46:43.270357   46209 host.go:66] Checking if "multinode-087362-m02" exists ...
	I0829 18:46:43.270733   46209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:46:43.270785   46209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:46:43.285489   46209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39413
	I0829 18:46:43.285911   46209 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:46:43.286367   46209 main.go:141] libmachine: Using API Version  1
	I0829 18:46:43.286388   46209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:46:43.286676   46209 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:46:43.286860   46209 main.go:141] libmachine: (multinode-087362-m02) Calling .DriverName
	I0829 18:46:43.287052   46209 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:46:43.287069   46209 main.go:141] libmachine: (multinode-087362-m02) Calling .GetSSHHostname
	I0829 18:46:43.289749   46209 main.go:141] libmachine: (multinode-087362-m02) DBG | domain multinode-087362-m02 has defined MAC address 52:54:00:1f:45:58 in network mk-multinode-087362
	I0829 18:46:43.290229   46209 main.go:141] libmachine: (multinode-087362-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:45:58", ip: ""} in network mk-multinode-087362: {Iface:virbr1 ExpiryTime:2024-08-29 19:44:50 +0000 UTC Type:0 Mac:52:54:00:1f:45:58 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-087362-m02 Clientid:01:52:54:00:1f:45:58}
	I0829 18:46:43.290258   46209 main.go:141] libmachine: (multinode-087362-m02) DBG | domain multinode-087362-m02 has defined IP address 192.168.39.194 and MAC address 52:54:00:1f:45:58 in network mk-multinode-087362
	I0829 18:46:43.290411   46209 main.go:141] libmachine: (multinode-087362-m02) Calling .GetSSHPort
	I0829 18:46:43.290587   46209 main.go:141] libmachine: (multinode-087362-m02) Calling .GetSSHKeyPath
	I0829 18:46:43.290721   46209 main.go:141] libmachine: (multinode-087362-m02) Calling .GetSSHUsername
	I0829 18:46:43.290860   46209 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13071/.minikube/machines/multinode-087362-m02/id_rsa Username:docker}
	I0829 18:46:43.368903   46209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:46:43.383232   46209 status.go:257] multinode-087362-m02 status: &{Name:multinode-087362-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:46:43.383272   46209 status.go:255] checking status of multinode-087362-m03 ...
	I0829 18:46:43.383608   46209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:46:43.383645   46209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:46:43.399662   46209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37037
	I0829 18:46:43.400176   46209 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:46:43.400777   46209 main.go:141] libmachine: Using API Version  1
	I0829 18:46:43.400816   46209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:46:43.401143   46209 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:46:43.401344   46209 main.go:141] libmachine: (multinode-087362-m03) Calling .GetState
	I0829 18:46:43.402983   46209 status.go:330] multinode-087362-m03 host status = "Stopped" (err=<nil>)
	I0829 18:46:43.402998   46209 status.go:343] host is not running, skipping remaining checks
	I0829 18:46:43.403005   46209 status.go:257] multinode-087362-m03 status: &{Name:multinode-087362-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.27s)

TestMultiNode/serial/StartAfterStop (41.98s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-087362 node start m03 -v=7 --alsologtostderr: (41.362831773s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.98s)

TestMultiNode/serial/RestartKeepsNodes (190.69s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-087362
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-087362
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-087362: (27.246572575s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-087362 --wait=true -v=8 --alsologtostderr
E0829 18:47:54.054531   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:49:11.839863   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-087362 --wait=true -v=8 --alsologtostderr: (2m43.355741618s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-087362
--- PASS: TestMultiNode/serial/RestartKeepsNodes (190.69s)

TestMultiNode/serial/DeleteNode (2.25s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-087362 node delete m03: (1.697560337s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.25s)

TestMultiNode/serial/StopMultiNode (25.04s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-087362 stop: (24.885015555s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-087362 status: exit status 7 (79.121279ms)

-- stdout --
	multinode-087362
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-087362-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-087362 status --alsologtostderr: exit status 7 (80.304099ms)

-- stdout --
	multinode-087362
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-087362-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0829 18:51:03.330886   48032 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:51:03.331041   48032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:51:03.331046   48032 out.go:358] Setting ErrFile to fd 2...
	I0829 18:51:03.331051   48032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:51:03.331563   48032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13071/.minikube/bin
	I0829 18:51:03.331774   48032 out.go:352] Setting JSON to false
	I0829 18:51:03.331802   48032 mustload.go:65] Loading cluster: multinode-087362
	I0829 18:51:03.331843   48032 notify.go:220] Checking for updates...
	I0829 18:51:03.332178   48032 config.go:182] Loaded profile config "multinode-087362": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:51:03.332192   48032 status.go:255] checking status of multinode-087362 ...
	I0829 18:51:03.332575   48032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:51:03.332621   48032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:51:03.347340   48032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34399
	I0829 18:51:03.347896   48032 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:51:03.348537   48032 main.go:141] libmachine: Using API Version  1
	I0829 18:51:03.348582   48032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:51:03.349008   48032 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:51:03.349243   48032 main.go:141] libmachine: (multinode-087362) Calling .GetState
	I0829 18:51:03.351003   48032 status.go:330] multinode-087362 host status = "Stopped" (err=<nil>)
	I0829 18:51:03.351022   48032 status.go:343] host is not running, skipping remaining checks
	I0829 18:51:03.351031   48032 status.go:257] multinode-087362 status: &{Name:multinode-087362 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:51:03.351056   48032 status.go:255] checking status of multinode-087362-m02 ...
	I0829 18:51:03.351403   48032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0829 18:51:03.351446   48032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:51:03.366659   48032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40389
	I0829 18:51:03.367194   48032 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:51:03.367712   48032 main.go:141] libmachine: Using API Version  1
	I0829 18:51:03.367736   48032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:51:03.368063   48032 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:51:03.368256   48032 main.go:141] libmachine: (multinode-087362-m02) Calling .GetState
	I0829 18:51:03.369685   48032 status.go:330] multinode-087362-m02 host status = "Stopped" (err=<nil>)
	I0829 18:51:03.369701   48032 status.go:343] host is not running, skipping remaining checks
	I0829 18:51:03.369707   48032 status.go:257] multinode-087362-m02 status: &{Name:multinode-087362-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.04s)

TestMultiNode/serial/RestartMultiNode (114.35s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-087362 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0829 18:52:54.055248   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-087362 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m53.841223001s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-087362 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (114.35s)

TestMultiNode/serial/ValidateNameConflict (50.95s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-087362
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-087362-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-087362-m02 --driver=kvm2 : exit status 14 (57.166214ms)

-- stdout --
	* [multinode-087362-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-13071/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13071/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-087362-m02' is duplicated with machine name 'multinode-087362-m02' in profile 'multinode-087362'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-087362-m03 --driver=kvm2 
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-087362-m03 --driver=kvm2 : (49.707015108s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-087362
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-087362: exit status 80 (197.558594ms)

-- stdout --
	* Adding node m03 to cluster multinode-087362 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-087362-m03 already exists in multinode-087362-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-087362-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (50.95s)

TestPreload (185.48s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-367839 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0829 18:54:11.839287   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-367839 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m58.473878527s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-367839 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-367839 image pull gcr.io/k8s-minikube/busybox: (1.436458452s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-367839
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-367839: (12.597958555s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-367839 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-367839 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (51.904923701s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-367839 image list
helpers_test.go:175: Cleaning up "test-preload-367839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-367839
--- PASS: TestPreload (185.48s)

TestScheduledStopUnix (120.27s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-550929 --memory=2048 --driver=kvm2 
E0829 18:57:14.908697   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-550929 --memory=2048 --driver=kvm2 : (48.754675128s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-550929 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-550929 -n scheduled-stop-550929
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-550929 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-550929 --cancel-scheduled
E0829 18:57:54.054287   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-550929 -n scheduled-stop-550929
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-550929
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-550929 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-550929
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-550929: exit status 7 (58.904618ms)

-- stdout --
	scheduled-stop-550929
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-550929 -n scheduled-stop-550929
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-550929 -n scheduled-stop-550929: exit status 7 (64.474518ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-550929" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-550929
--- PASS: TestScheduledStopUnix (120.27s)

TestSkaffold (126.82s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe902029664 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-231822 --memory=2600 --driver=kvm2 
E0829 18:59:11.839538   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-231822 --memory=2600 --driver=kvm2 : (46.536507976s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe902029664 run --minikube-profile skaffold-231822 --kube-context skaffold-231822 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe902029664 run --minikube-profile skaffold-231822 --kube-context skaffold-231822 --status-check=true --port-forward=false --interactive=false: (1m7.377915075s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-56db4f9d-6c2wh" [9caf297b-3727-44c2-9d9d-488232205b4f] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003534046s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6895bc8c65-sx9s6" [5166c2d9-80ef-42ec-92e7-96ff4583168a] Running
E0829 19:00:57.122465   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003907063s
helpers_test.go:175: Cleaning up "skaffold-231822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-231822
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-231822: (1.188756811s)
--- PASS: TestSkaffold (126.82s)

TestRunningBinaryUpgrade (148.73s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2605213915 start -p running-upgrade-366358 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2605213915 start -p running-upgrade-366358 --memory=2200 --vm-driver=kvm2 : (1m7.194697963s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-366358 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-366358 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m19.203630159s)
helpers_test.go:175: Cleaning up "running-upgrade-366358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-366358
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-366358: (1.812304925s)
--- PASS: TestRunningBinaryUpgrade (148.73s)

TestKubernetesUpgrade (147.72s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-078710 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-078710 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (1m6.857128127s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-078710
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-078710: (3.305778571s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-078710 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-078710 status --format={{.Host}}: exit status 7 (62.205561ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-078710 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-078710 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 : (42.630387405s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-078710 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-078710 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-078710 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (89.938137ms)

-- stdout --
	* [kubernetes-upgrade-078710] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-13071/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13071/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-078710
	    minikube start -p kubernetes-upgrade-078710 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0787102 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-078710 --kubernetes-version=v1.31.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-078710 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-078710 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 : (33.420225395s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-078710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-078710
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-078710: (1.290730866s)
--- PASS: TestKubernetesUpgrade (147.72s)

TestStoppedBinaryUpgrade/Setup (0.51s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.51s)

TestStoppedBinaryUpgrade/Upgrade (180.72s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1931408786 start -p stopped-upgrade-730407 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1931408786 start -p stopped-upgrade-730407 --memory=2200 --vm-driver=kvm2 : (1m26.924189478s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1931408786 -p stopped-upgrade-730407 stop
E0829 19:07:12.501286   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/skaffold-231822/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1931408786 -p stopped-upgrade-730407 stop: (12.460284843s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-730407 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-730407 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m21.339477217s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (180.72s)

TestPause/serial/Start (100.22s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-296182 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-296182 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m40.221189938s)
--- PASS: TestPause/serial/Start (100.22s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-561752 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-561752 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (80.088464ms)

-- stdout --
	* [NoKubernetes-561752] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-13071/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13071/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (89.3s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-561752 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-561752 --driver=kvm2 : (1m29.026646707s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-561752 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (89.30s)

TestNetworkPlugins/group/auto/Start (135.74s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-178943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E0829 19:07:54.054460   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-178943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (2m15.740600489s)
--- PASS: TestNetworkPlugins/group/auto/Start (135.74s)

TestNoKubernetes/serial/StartWithStopK8s (27.01s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-561752 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-561752 --no-kubernetes --driver=kvm2 : (25.542320241s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-561752 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-561752 status -o json: exit status 2 (240.228792ms)

-- stdout --
	{"Name":"NoKubernetes-561752","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-561752
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-561752: (1.222411794s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (27.01s)

TestPause/serial/SecondStartNoReconfiguration (51.83s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-296182 --alsologtostderr -v=1 --driver=kvm2 
E0829 19:08:34.423278   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/skaffold-231822/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-296182 --alsologtostderr -v=1 --driver=kvm2 : (51.801835421s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (51.83s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-730407
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-730407: (1.105541663s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

TestNetworkPlugins/group/kindnet/Start (76.64s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-178943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E0829 19:08:38.080234   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/gvisor-693858/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:08:39.362045   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/gvisor-693858/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:08:41.924386   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/gvisor-693858/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:08:47.046131   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/gvisor-693858/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-178943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m16.640560119s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (76.64s)

TestNoKubernetes/serial/Start (44.1s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-561752 --no-kubernetes --driver=kvm2 
E0829 19:08:57.288082   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/gvisor-693858/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:09:11.840187   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:09:17.770072   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/gvisor-693858/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-561752 --no-kubernetes --driver=kvm2 : (44.102310971s)
--- PASS: TestNoKubernetes/serial/Start (44.10s)

TestPause/serial/Pause (0.61s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-296182 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.61s)

TestPause/serial/VerifyStatus (0.26s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-296182 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-296182 --output=json --layout=cluster: exit status 2 (260.866325ms)

-- stdout --
	{"Name":"pause-296182","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-296182","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)

TestPause/serial/Unpause (0.56s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-296182 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.56s)

TestPause/serial/PauseAgain (0.77s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-296182 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.77s)

TestPause/serial/DeletePaused (0.85s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-296182 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.85s)

TestPause/serial/VerifyDeletedResources (1.31s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.306397821s)
--- PASS: TestPause/serial/VerifyDeletedResources (1.31s)

TestNetworkPlugins/group/calico/Start (96.84s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-178943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-178943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m36.838372428s)
--- PASS: TestNetworkPlugins/group/calico/Start (96.84s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-561752 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-561752 "sudo systemctl is-active --quiet service kubelet": exit status 1 (230.202503ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

TestNoKubernetes/serial/ProfileList (1.22s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.22s)

TestNoKubernetes/serial/Stop (2.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-561752
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-561752: (2.297581452s)
--- PASS: TestNoKubernetes/serial/Stop (2.30s)

TestNoKubernetes/serial/StartNoArgs (45.69s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-561752 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-561752 --driver=kvm2 : (45.694463337s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (45.69s)

TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-178943 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

TestNetworkPlugins/group/auto/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-178943 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c2f2x" [78921c40-edf1-4a86-9ea5-dedf9c4ac5ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-c2f2x" [78921c40-edf1-4a86-9ea5-dedf9c4ac5ce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004401489s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.25s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-gkphw" [eaf523ed-5ae0-431e-88bd-baf8b0b0a3bd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005531772s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-178943 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-178943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-178943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
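The Localhost and HairPin checks differ only in the target: Localhost connects to the pod's own port via `localhost`, while HairPin connects to the pod through its own service name (`netcat`), so the traffic must leave the pod and hairpin back through the service. Both are plain TCP connect probes, which is all `nc -w 5 -z host port` does. A Python sketch of that probe, demonstrated against a throwaway local listener standing in for the netcat pod (hypothetical helper, assumes nothing about the cluster):

```python
import socket
import threading

def tcp_probe(host: str, port: int, timeout: float = 5.0) -> bool:
    """Equivalent of `nc -w 5 -z host port`: open a TCP connection,
    send nothing, close it, and report success or failure."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Throwaway listener on a free port, playing the role of the netcat pod.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))      # OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=listener.accept, daemon=True).start()

print(tcp_probe("127.0.0.1", port))  # listener is up, so the probe succeeds
listener.close()
```

In the cluster, the hairpin variant only passes when the kubelet/CNI hairpin mode lets a pod reach itself through its own service VIP, which is why it is tested separately from the localhost case.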

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-178943 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (14.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-178943 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fz66q" [37c1f80a-4952-41b1-af9d-b26ab69b80cf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fz66q" [37c1f80a-4952-41b1-af9d-b26ab69b80cf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.005096701s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (76.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-178943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-178943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m16.183675285s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (76.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-178943 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-178943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-178943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-561752 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-561752 "sudo systemctl is-active --quiet service kubelet": exit status 1 (247.320469ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

                                                
                                    
TestNetworkPlugins/group/false/Start (88.84s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-178943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-178943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m28.839289693s)
--- PASS: TestNetworkPlugins/group/false/Start (88.84s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (109.05s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-178943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0829 19:10:50.561418   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/skaffold-231822/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-178943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m49.04876684s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (109.05s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-58425" [6a95f99b-ad69-4162-b744-fd0f4945c4e0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005152885s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-178943 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-178943 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5fgf6" [54ba65c2-8772-42d4-9661-c68703fd364f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5fgf6" [54ba65c2-8772-42d4-9661-c68703fd364f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004402304s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-178943 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-178943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-178943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-178943 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-178943 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ccnbt" [5aef532a-a5bd-4377-b311-1c58f4a3a9a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ccnbt" [5aef532a-a5bd-4377-b311-1c58f4a3a9a7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.00490764s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (77.61s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-178943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-178943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m17.606310254s)
--- PASS: TestNetworkPlugins/group/flannel/Start (77.61s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-178943 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-178943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-178943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-178943 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.25s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-178943 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9z4g2" [0b923d87-a31b-4c75-8cb3-aecdff153bc1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9z4g2" [0b923d87-a31b-4c75-8cb3-aecdff153bc1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.005495643s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (104.13s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-178943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-178943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m44.12910118s)
--- PASS: TestNetworkPlugins/group/bridge/Start (104.13s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-178943 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-178943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-178943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-178943 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-178943 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-s6d7q" [ecf10ee2-6f9e-46cd-8837-9e971e34912e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-s6d7q" [ecf10ee2-6f9e-46cd-8837-9e971e34912e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004583596s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (80.27s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-178943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-178943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m20.273397715s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (80.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-178943 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-178943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-178943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (194.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-036096 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-036096 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (3m14.091084374s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (194.09s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-kd79r" [23b9f259-1b7f-4fad-bfcb-5c0983339abc] Running
E0829 19:12:54.054499   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004455614s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-178943 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-178943 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hc2gp" [33e8998a-4a96-496a-98f7-4bd636d789ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hc2gp" [33e8998a-4a96-496a-98f7-4bd636d789ac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.032106965s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-178943 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-178943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-178943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (90.31s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-284336 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0
E0829 19:13:36.789632   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/gvisor-693858/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-284336 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0: (1m30.30964386s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (90.31s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-178943 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (13.28s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-178943 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vzpjk" [74084d1e-db14-418b-996a-8c81ccdb2aae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vzpjk" [74084d1e-db14-418b-996a-8c81ccdb2aae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.004477602s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-178943 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.30s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-178943 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7289f" [fd3ea04f-9ba3-4e60-8c58-1e94fe71c202] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7289f" [fd3ea04f-9ba3-4e60-8c58-1e94fe71c202] Running
E0829 19:13:54.910648   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.007154706s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.30s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-178943 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-178943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

TestNetworkPlugins/group/kubenet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-178943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)

TestNetworkPlugins/group/bridge/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-178943 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-178943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-178943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestStartStop/group/embed-certs/serial/FirstStart (101.91s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-923312 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-923312 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.0: (1m41.905420941s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (101.91s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (126.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-532454 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.0
E0829 19:14:45.299591   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/auto-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:45.306108   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/auto-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:45.317692   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/auto-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:45.339233   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/auto-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:45.381026   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/auto-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:45.463060   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/auto-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:45.624625   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/auto-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:45.946834   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/auto-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:46.589188   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/auto-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:47.870847   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/auto-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:50.432435   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/auto-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:54.102854   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kindnet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:54.109292   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kindnet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:54.120900   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kindnet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:54.142366   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kindnet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:54.183894   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kindnet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:54.265765   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kindnet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:54.427347   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kindnet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:54.749411   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kindnet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:55.391670   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kindnet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:55.554290   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/auto-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:56.673295   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kindnet-178943/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-532454 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.0: (2m6.572675599s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (126.57s)

TestStartStop/group/no-preload/serial/DeployApp (10.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-284336 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a50d30ec-3052-4853-b1e1-d5589352dd7f] Pending
helpers_test.go:344: "busybox" [a50d30ec-3052-4853-b1e1-d5589352dd7f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0829 19:14:59.234567   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kindnet-178943/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [a50d30ec-3052-4853-b1e1-d5589352dd7f] Running
E0829 19:15:04.356050   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kindnet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:15:05.796010   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/auto-178943/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.00426415s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-284336 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.37s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-284336 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-284336 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/no-preload/serial/Stop (13.88s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-284336 --alsologtostderr -v=3
E0829 19:15:14.598061   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kindnet-178943/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-284336 --alsologtostderr -v=3: (13.877316842s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.88s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-284336 -n no-preload-284336
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-284336 -n no-preload-284336: exit status 7 (163.491127ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-284336 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/no-preload/serial/SecondStart (309.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-284336 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0
E0829 19:15:26.277387   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/auto-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:15:35.080307   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kindnet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:15:50.560515   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/skaffold-231822/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-284336 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0: (5m8.988374311s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-284336 -n no-preload-284336
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (309.25s)

TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-923312 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3348510d-a83c-49f5-be91-c42f95cb3f34] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0829 19:15:59.993005   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/calico-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:15:59.999422   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/calico-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:00.010877   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/calico-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:00.032365   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/calico-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:00.073933   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/calico-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:00.155405   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/calico-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:00.316915   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/calico-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:00.638258   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/calico-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:01.280053   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/calico-178943/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [3348510d-a83c-49f5-be91-c42f95cb3f34] Running
E0829 19:16:02.561395   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/calico-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:05.123537   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/calico-178943/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004299897s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-923312 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-036096 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [00f9581e-dcca-40d7-b0f5-875dbae36e0e] Pending
helpers_test.go:344: "busybox" [00f9581e-dcca-40d7-b0f5-875dbae36e0e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [00f9581e-dcca-40d7-b0f5-875dbae36e0e] Running
E0829 19:16:10.245739   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/calico-178943/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004367119s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-036096 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-923312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0829 19:16:07.239217   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/auto-178943/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-923312 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/embed-certs/serial/Stop (12.68s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-923312 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-923312 --alsologtostderr -v=3: (12.678769676s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.68s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-036096 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-036096 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/old-k8s-version/serial/Stop (12.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-036096 --alsologtostderr -v=3
E0829 19:16:16.042270   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kindnet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:20.488018   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/calico-178943/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-036096 --alsologtostderr -v=3: (12.637103917s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.64s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-923312 -n embed-certs-923312
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-923312 -n embed-certs-923312: exit status 7 (66.762501ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-923312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (304.81s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-923312 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-923312 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.0: (5m4.554336789s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-923312 -n embed-certs-923312
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (304.81s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-532454 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c2a1db24-e6c1-42e6-a7f5-58d1ba57719f] Pending
helpers_test.go:344: "busybox" [c2a1db24-e6c1-42e6-a7f5-58d1ba57719f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c2a1db24-e6c1-42e6-a7f5-58d1ba57719f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00411045s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-532454 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.33s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-036096 -n old-k8s-version-036096
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-036096 -n old-k8s-version-036096: exit status 7 (70.036386ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-036096 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (422.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-036096 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
E0829 19:16:30.620548   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/custom-flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:30.627126   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/custom-flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:30.638708   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/custom-flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:30.660313   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/custom-flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:30.701841   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/custom-flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:30.783322   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/custom-flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:30.944993   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/custom-flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:31.267130   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/custom-flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:31.908553   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/custom-flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:33.190869   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/custom-flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-036096 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (7m1.7814823s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-036096 -n old-k8s-version-036096
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (422.03s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-532454 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-532454 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.34s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-532454 --alsologtostderr -v=3
E0829 19:16:35.753181   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/custom-flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:40.874874   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/custom-flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:40.969952   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/calico-178943/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-532454 --alsologtostderr -v=3: (13.34225176s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.34s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-532454 -n default-k8s-diff-port-532454
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-532454 -n default-k8s-diff-port-532454: exit status 7 (75.7886ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-532454 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (315.04s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-532454 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.0
E0829 19:16:51.116841   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/custom-flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:54.257095   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/false-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:54.263545   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/false-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:54.275050   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/false-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:54.296617   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/false-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:54.338151   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/false-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:54.419594   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/false-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:54.581460   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/false-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:54.903270   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/false-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:55.544880   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/false-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:56.827231   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/false-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:16:59.389525   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/false-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:04.511492   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/false-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:11.598902   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/custom-flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:14.753650   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/false-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:21.518699   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/enable-default-cni-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:21.525152   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/enable-default-cni-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:21.536619   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/enable-default-cni-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:21.558056   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/enable-default-cni-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:21.600318   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/enable-default-cni-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:21.681774   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/enable-default-cni-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:21.844075   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/enable-default-cni-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:21.931654   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/calico-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:22.166279   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/enable-default-cni-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:22.807950   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/enable-default-cni-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:24.089472   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/enable-default-cni-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:26.651214   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/enable-default-cni-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:29.160883   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/auto-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:31.773143   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/enable-default-cni-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:35.235586   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/false-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:37.124166   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:37.963725   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kindnet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:42.015382   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/enable-default-cni-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:52.561028   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/custom-flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:52.842617   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:52.849051   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:52.860560   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:52.882037   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:52.923513   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:53.005170   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:53.166741   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:53.488516   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:54.054238   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/functional-558069/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:54.129824   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:55.412099   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:57.973614   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:02.497135   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/enable-default-cni-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:03.095695   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:13.337942   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:16.197591   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/false-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:33.819916   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:36.789809   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/gvisor-693858/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:43.459069   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/enable-default-cni-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:43.853860   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/calico-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:45.342338   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kubenet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:45.348749   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kubenet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:45.360166   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kubenet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:45.381652   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kubenet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:45.423276   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kubenet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:45.504701   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kubenet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:45.666374   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kubenet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:45.987907   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kubenet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:46.040418   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/bridge-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:46.046926   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/bridge-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:46.058350   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/bridge-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:46.079780   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/bridge-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:46.121848   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/bridge-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:46.203405   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/bridge-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:46.365654   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/bridge-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:46.629283   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kubenet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:46.687891   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/bridge-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:47.329260   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/bridge-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:47.910999   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kubenet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:48.611248   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/bridge-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:50.472448   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kubenet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:51.173520   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/bridge-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:55.594657   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kubenet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:56.295638   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/bridge-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:19:05.836523   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kubenet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:19:06.537197   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/bridge-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:19:11.839978   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/addons-661794/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:19:14.483175   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/custom-flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:19:14.781845   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:19:26.317858   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kubenet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:19:27.019222   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/bridge-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:19:38.118987   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/false-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:19:45.299451   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/auto-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:19:54.102847   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kindnet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:20:05.380876   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/enable-default-cni-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:20:07.279282   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kubenet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:20:07.980998   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/bridge-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:20:13.002693   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/auto-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:20:21.805325   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kindnet-178943/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-532454 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.0: (5m14.768948551s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-532454 -n default-k8s-diff-port-532454
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (315.04s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-sqd45" [e30a28bb-8c6d-442c-be07-43ef07524dd0] Running
E0829 19:20:36.704159   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003766994s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-sqd45" [e30a28bb-8c6d-442c-be07-43ef07524dd0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004897391s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-284336 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.2s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-284336 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/no-preload/serial/Pause (2.44s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-284336 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-284336 -n no-preload-284336
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-284336 -n no-preload-284336: exit status 2 (233.87813ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-284336 -n no-preload-284336
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-284336 -n no-preload-284336: exit status 2 (244.182588ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-284336 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-284336 -n no-preload-284336
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-284336 -n no-preload-284336
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.44s)

TestStartStop/group/newest-cni/serial/FirstStart (62.29s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-841695 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0
E0829 19:20:50.561206   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/skaffold-231822/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:20:59.993342   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/calico-178943/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-841695 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0: (1m2.293958526s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (62.29s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jfg7z" [3b4ab018-28bf-453c-a358-dea18bf0a344] Running
E0829 19:21:27.696196   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/calico-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:21:29.200915   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/kubenet-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:21:29.902575   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/bridge-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:21:30.620756   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/custom-flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004775242s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jfg7z" [3b4ab018-28bf-453c-a358-dea18bf0a344] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004424639s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-923312 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-923312 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.62s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-923312 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-923312 -n embed-certs-923312
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-923312 -n embed-certs-923312: exit status 2 (263.875678ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-923312 -n embed-certs-923312
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-923312 -n embed-certs-923312: exit status 2 (264.491468ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-923312 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-923312 -n embed-certs-923312
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-923312 -n embed-certs-923312
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.62s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-841695 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/newest-cni/serial/Stop (8.33s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-841695 --alsologtostderr -v=3
E0829 19:21:54.256585   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/false-178943/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:21:58.325168   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/custom-flannel-178943/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-841695 --alsologtostderr -v=3: (8.330791396s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.33s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-841695 -n newest-cni-841695
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-841695 -n newest-cni-841695: exit status 7 (66.88078ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-841695 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (38.27s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-841695 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-841695 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0: (37.949811076s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-841695 -n newest-cni-841695
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.27s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-nfl7k" [f5eda9c9-fafd-488c-8509-f784b26a6559] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004640895s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-nfl7k" [f5eda9c9-fafd-488c-8509-f784b26a6559] Running
E0829 19:22:13.626691   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/skaffold-231822/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00444694s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-532454 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-532454 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-532454 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-532454 -n default-k8s-diff-port-532454
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-532454 -n default-k8s-diff-port-532454: exit status 2 (231.997919ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-532454 -n default-k8s-diff-port-532454
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-532454 -n default-k8s-diff-port-532454: exit status 2 (239.255166ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-532454 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-532454 -n default-k8s-diff-port-532454
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-532454 -n default-k8s-diff-port-532454
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.37s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-841695 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (2.47s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-841695 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-841695 -n newest-cni-841695
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-841695 -n newest-cni-841695: exit status 2 (243.182281ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-841695 -n newest-cni-841695
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-841695 -n newest-cni-841695: exit status 2 (250.189857ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-841695 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-841695 -n newest-cni-841695
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-841695 -n newest-cni-841695
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.47s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-t682l" [bd265b72-0b0b-4abd-8001-ca77ea95e8ce] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004139064s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-t682l" [bd265b72-0b0b-4abd-8001-ca77ea95e8ce] Running
E0829 19:23:36.790543   20250 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13071/.minikube/profiles/gvisor-693858/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003894235s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-036096 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-036096 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/old-k8s-version/serial/Pause (2.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-036096 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-036096 -n old-k8s-version-036096
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-036096 -n old-k8s-version-036096: exit status 2 (236.243514ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-036096 -n old-k8s-version-036096
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-036096 -n old-k8s-version-036096: exit status 2 (237.500065ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-036096 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-036096 -n old-k8s-version-036096
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-036096 -n old-k8s-version-036096
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.25s)

Test skip (31/341)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.28s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-178943 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-178943

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-178943

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-178943

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-178943

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-178943

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-178943

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-178943

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-178943

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-178943

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-178943

>>> host: /etc/nsswitch.conf:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: /etc/hosts:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: /etc/resolv.conf:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-178943

>>> host: crictl pods:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: crictl containers:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> k8s: describe netcat deployment:
error: context "cilium-178943" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-178943" does not exist

>>> k8s: netcat logs:
error: context "cilium-178943" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-178943" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-178943" does not exist

>>> k8s: coredns logs:
error: context "cilium-178943" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-178943" does not exist

>>> k8s: api server logs:
error: context "cilium-178943" does not exist

>>> host: /etc/cni:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: ip a s:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: ip r s:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: iptables-save:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: iptables table nat:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-178943

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-178943

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-178943" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-178943" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-178943

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-178943

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-178943" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-178943" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-178943" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-178943" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-178943" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: kubelet daemon config:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> k8s: kubelet logs:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-178943

>>> host: docker daemon status:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: docker daemon config:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: docker system info:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: cri-docker daemon status:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: cri-docker daemon config:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: cri-dockerd version:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: containerd daemon status:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: containerd daemon config:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: containerd config dump:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: crio daemon status:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: crio daemon config:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: /etc/crio:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

>>> host: crio config:
* Profile "cilium-178943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-178943"

----------------------- debugLogs end: cilium-178943 [took: 3.126223617s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-178943" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-178943
--- SKIP: TestNetworkPlugins/group/cilium (3.28s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-345240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-345240
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)