Test Report: KVM_Linux 19681

58481425fd156c33d9cb9581f1bb301aacf19547:2024-09-25:36370

Tests failed (1/340)

| Order | Failed test                   | Duration |
|-------|-------------------------------|----------|
| 33    | TestAddons/parallel/Registry  | 74.88s   |
TestAddons/parallel/Registry (74.88s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 3.136683ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-jt9th" [2ec7da64-0a87-4bfe-a46b-b23794d946ae] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006322789s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-r2w4s" [3d389aa0-4f88-41fd-a0f0-8ee90b81a8a3] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.087229826s
addons_test.go:338: (dbg) Run:  kubectl --context addons-608075 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-608075 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-608075 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.130462695s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-608075 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-608075 ip
2024/09/25 18:43:19 [DEBUG] GET http://192.168.39.81:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-608075 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-608075 -n addons-608075
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-608075 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-608075 logs -n 25: (1.012012379s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-433203                                                                     | download-only-433203 | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
	| delete  | -p download-only-467975                                                                     | download-only-467975 | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-724330 | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC |                     |
	|         | binary-mirror-724330                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41573                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-724330                                                                     | binary-mirror-724330 | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
	| addons  | enable dashboard -p                                                                         | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC |                     |
	|         | addons-608075                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC |                     |
	|         | addons-608075                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-608075 --wait=true                                                                | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:33 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2  --addons=ingress                                                             |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | addons-608075 addons disable                                                                | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:33 UTC | 25 Sep 24 18:34 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:42 UTC | 25 Sep 24 18:42 UTC |
	|         | -p addons-608075                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-608075 addons disable                                                                | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:42 UTC | 25 Sep 24 18:42 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-608075 addons                                                                        | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:42 UTC | 25 Sep 24 18:42 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-608075 addons disable                                                                | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:42 UTC | 25 Sep 24 18:42 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:42 UTC | 25 Sep 24 18:42 UTC |
	|         | addons-608075                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-608075 ssh curl -s                                                                   | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:42 UTC | 25 Sep 24 18:42 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-608075 ip                                                                            | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:42 UTC | 25 Sep 24 18:42 UTC |
	| addons  | addons-608075 addons disable                                                                | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:42 UTC | 25 Sep 24 18:42 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:42 UTC | 25 Sep 24 18:42 UTC |
	|         | addons-608075                                                                               |                      |         |         |                     |                     |
	| addons  | addons-608075 addons disable                                                                | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:42 UTC | 25 Sep 24 18:42 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:42 UTC | 25 Sep 24 18:42 UTC |
	|         | -p addons-608075                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-608075 ssh cat                                                                       | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:42 UTC | 25 Sep 24 18:42 UTC |
	|         | /opt/local-path-provisioner/pvc-268deaac-a2f5-491e-8565-4ab0e1112f3d_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-608075 addons disable                                                                | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:42 UTC | 25 Sep 24 18:42 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-608075 addons                                                                        | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:43 UTC | 25 Sep 24 18:43 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-608075 addons                                                                        | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:43 UTC | 25 Sep 24 18:43 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-608075 ip                                                                            | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:43 UTC | 25 Sep 24 18:43 UTC |
	| addons  | addons-608075 addons disable                                                                | addons-608075        | jenkins | v1.34.0 | 25 Sep 24 18:43 UTC | 25 Sep 24 18:43 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/25 18:29:42
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 18:29:42.338199   13872 out.go:345] Setting OutFile to fd 1 ...
	I0925 18:29:42.338433   13872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:29:42.338441   13872 out.go:358] Setting ErrFile to fd 2...
	I0925 18:29:42.338445   13872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:29:42.338659   13872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19681-6065/.minikube/bin
	I0925 18:29:42.339261   13872 out.go:352] Setting JSON to false
	I0925 18:29:42.340035   13872 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":725,"bootTime":1727288257,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 18:29:42.340125   13872 start.go:139] virtualization: kvm guest
	I0925 18:29:42.342053   13872 out.go:177] * [addons-608075] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0925 18:29:42.343598   13872 notify.go:220] Checking for updates...
	I0925 18:29:42.343633   13872 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 18:29:42.344832   13872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 18:29:42.346145   13872 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19681-6065/kubeconfig
	I0925 18:29:42.347379   13872 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-6065/.minikube
	I0925 18:29:42.348530   13872 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0925 18:29:42.349785   13872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 18:29:42.351227   13872 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 18:29:42.383169   13872 out.go:177] * Using the kvm2 driver based on user configuration
	I0925 18:29:42.384364   13872 start.go:297] selected driver: kvm2
	I0925 18:29:42.384376   13872 start.go:901] validating driver "kvm2" against <nil>
	I0925 18:29:42.384387   13872 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 18:29:42.385093   13872 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 18:29:42.385202   13872 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19681-6065/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0925 18:29:42.400651   13872 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0925 18:29:42.400710   13872 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 18:29:42.400958   13872 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 18:29:42.400993   13872 cni.go:84] Creating CNI manager for ""
	I0925 18:29:42.401036   13872 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 18:29:42.401049   13872 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 18:29:42.401101   13872 start.go:340] cluster config:
	{Name:addons-608075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-608075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 18:29:42.401229   13872 iso.go:125] acquiring lock: {Name:mkac644039a90c04558d628f48440edffcc827c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 18:29:42.403072   13872 out.go:177] * Starting "addons-608075" primary control-plane node in "addons-608075" cluster
	I0925 18:29:42.404316   13872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 18:29:42.404363   13872 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19681-6065/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0925 18:29:42.404376   13872 cache.go:56] Caching tarball of preloaded images
	I0925 18:29:42.404491   13872 preload.go:172] Found /home/jenkins/minikube-integration/19681-6065/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0925 18:29:42.404504   13872 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0925 18:29:42.404903   13872 profile.go:143] Saving config to /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/config.json ...
	I0925 18:29:42.404927   13872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/config.json: {Name:mk5aca4390e98e281fdf94809b6f8f464a892195 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:29:42.405080   13872 start.go:360] acquireMachinesLock for addons-608075: {Name:mk4be3f75270fc4d5982b5b3ed9860f7925b37a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 18:29:42.405141   13872 start.go:364] duration metric: took 45.141µs to acquireMachinesLock for "addons-608075"
	I0925 18:29:42.405158   13872 start.go:93] Provisioning new machine with config: &{Name:addons-608075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-608075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 18:29:42.405220   13872 start.go:125] createHost starting for "" (driver="kvm2")
	I0925 18:29:42.406889   13872 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0925 18:29:42.407018   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:29:42.407063   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:29:42.421291   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37027
	I0925 18:29:42.421684   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:29:42.422263   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:29:42.422284   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:29:42.422607   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:29:42.422786   13872 main.go:141] libmachine: (addons-608075) Calling .GetMachineName
	I0925 18:29:42.422927   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:29:42.423071   13872 start.go:159] libmachine.API.Create for "addons-608075" (driver="kvm2")
	I0925 18:29:42.423100   13872 client.go:168] LocalClient.Create starting
	I0925 18:29:42.423144   13872 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19681-6065/.minikube/certs/ca.pem
	I0925 18:29:42.574478   13872 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19681-6065/.minikube/certs/cert.pem
	I0925 18:29:42.678800   13872 main.go:141] libmachine: Running pre-create checks...
	I0925 18:29:42.678829   13872 main.go:141] libmachine: (addons-608075) Calling .PreCreateCheck
	I0925 18:29:42.679307   13872 main.go:141] libmachine: (addons-608075) Calling .GetConfigRaw
	I0925 18:29:42.679718   13872 main.go:141] libmachine: Creating machine...
	I0925 18:29:42.679731   13872 main.go:141] libmachine: (addons-608075) Calling .Create
	I0925 18:29:42.679851   13872 main.go:141] libmachine: (addons-608075) Creating KVM machine...
	I0925 18:29:42.681117   13872 main.go:141] libmachine: (addons-608075) DBG | found existing default KVM network
	I0925 18:29:42.681894   13872 main.go:141] libmachine: (addons-608075) DBG | I0925 18:29:42.681741   13894 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I0925 18:29:42.681917   13872 main.go:141] libmachine: (addons-608075) DBG | created network xml: 
	I0925 18:29:42.681933   13872 main.go:141] libmachine: (addons-608075) DBG | <network>
	I0925 18:29:42.681941   13872 main.go:141] libmachine: (addons-608075) DBG |   <name>mk-addons-608075</name>
	I0925 18:29:42.681953   13872 main.go:141] libmachine: (addons-608075) DBG |   <dns enable='no'/>
	I0925 18:29:42.681960   13872 main.go:141] libmachine: (addons-608075) DBG |   
	I0925 18:29:42.681967   13872 main.go:141] libmachine: (addons-608075) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0925 18:29:42.681975   13872 main.go:141] libmachine: (addons-608075) DBG |     <dhcp>
	I0925 18:29:42.681981   13872 main.go:141] libmachine: (addons-608075) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0925 18:29:42.681985   13872 main.go:141] libmachine: (addons-608075) DBG |     </dhcp>
	I0925 18:29:42.681993   13872 main.go:141] libmachine: (addons-608075) DBG |   </ip>
	I0925 18:29:42.681999   13872 main.go:141] libmachine: (addons-608075) DBG |   
	I0925 18:29:42.682010   13872 main.go:141] libmachine: (addons-608075) DBG | </network>
	I0925 18:29:42.682021   13872 main.go:141] libmachine: (addons-608075) DBG | 
	I0925 18:29:42.687261   13872 main.go:141] libmachine: (addons-608075) DBG | trying to create private KVM network mk-addons-608075 192.168.39.0/24...
	I0925 18:29:42.753339   13872 main.go:141] libmachine: (addons-608075) DBG | private KVM network mk-addons-608075 192.168.39.0/24 created
	I0925 18:29:42.753368   13872 main.go:141] libmachine: (addons-608075) DBG | I0925 18:29:42.753315   13894 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19681-6065/.minikube
	I0925 18:29:42.753380   13872 main.go:141] libmachine: (addons-608075) Setting up store path in /home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075 ...
	I0925 18:29:42.753395   13872 main.go:141] libmachine: (addons-608075) Building disk image from file:///home/jenkins/minikube-integration/19681-6065/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0925 18:29:42.753456   13872 main.go:141] libmachine: (addons-608075) Downloading /home/jenkins/minikube-integration/19681-6065/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19681-6065/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0925 18:29:43.018051   13872 main.go:141] libmachine: (addons-608075) DBG | I0925 18:29:43.017936   13894 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa...
	I0925 18:29:43.161736   13872 main.go:141] libmachine: (addons-608075) DBG | I0925 18:29:43.161601   13894 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/addons-608075.rawdisk...
	I0925 18:29:43.161775   13872 main.go:141] libmachine: (addons-608075) DBG | Writing magic tar header
	I0925 18:29:43.161791   13872 main.go:141] libmachine: (addons-608075) DBG | Writing SSH key tar header
	I0925 18:29:43.161804   13872 main.go:141] libmachine: (addons-608075) DBG | I0925 18:29:43.161747   13894 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075 ...
	I0925 18:29:43.161902   13872 main.go:141] libmachine: (addons-608075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075
	I0925 18:29:43.161939   13872 main.go:141] libmachine: (addons-608075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19681-6065/.minikube/machines
	I0925 18:29:43.161953   13872 main.go:141] libmachine: (addons-608075) Setting executable bit set on /home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075 (perms=drwx------)
	I0925 18:29:43.161976   13872 main.go:141] libmachine: (addons-608075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19681-6065/.minikube
	I0925 18:29:43.161989   13872 main.go:141] libmachine: (addons-608075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19681-6065
	I0925 18:29:43.162000   13872 main.go:141] libmachine: (addons-608075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0925 18:29:43.162034   13872 main.go:141] libmachine: (addons-608075) Setting executable bit set on /home/jenkins/minikube-integration/19681-6065/.minikube/machines (perms=drwxr-xr-x)
	I0925 18:29:43.162058   13872 main.go:141] libmachine: (addons-608075) Setting executable bit set on /home/jenkins/minikube-integration/19681-6065/.minikube (perms=drwxr-xr-x)
	I0925 18:29:43.162068   13872 main.go:141] libmachine: (addons-608075) DBG | Checking permissions on dir: /home/jenkins
	I0925 18:29:43.162083   13872 main.go:141] libmachine: (addons-608075) DBG | Checking permissions on dir: /home
	I0925 18:29:43.162094   13872 main.go:141] libmachine: (addons-608075) DBG | Skipping /home - not owner
	I0925 18:29:43.162104   13872 main.go:141] libmachine: (addons-608075) Setting executable bit set on /home/jenkins/minikube-integration/19681-6065 (perms=drwxrwxr-x)
	I0925 18:29:43.162123   13872 main.go:141] libmachine: (addons-608075) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0925 18:29:43.162135   13872 main.go:141] libmachine: (addons-608075) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0925 18:29:43.162148   13872 main.go:141] libmachine: (addons-608075) Creating domain...
	I0925 18:29:43.163034   13872 main.go:141] libmachine: (addons-608075) define libvirt domain using xml: 
	I0925 18:29:43.163061   13872 main.go:141] libmachine: (addons-608075) <domain type='kvm'>
	I0925 18:29:43.163071   13872 main.go:141] libmachine: (addons-608075)   <name>addons-608075</name>
	I0925 18:29:43.163079   13872 main.go:141] libmachine: (addons-608075)   <memory unit='MiB'>4000</memory>
	I0925 18:29:43.163088   13872 main.go:141] libmachine: (addons-608075)   <vcpu>2</vcpu>
	I0925 18:29:43.163095   13872 main.go:141] libmachine: (addons-608075)   <features>
	I0925 18:29:43.163105   13872 main.go:141] libmachine: (addons-608075)     <acpi/>
	I0925 18:29:43.163110   13872 main.go:141] libmachine: (addons-608075)     <apic/>
	I0925 18:29:43.163118   13872 main.go:141] libmachine: (addons-608075)     <pae/>
	I0925 18:29:43.163128   13872 main.go:141] libmachine: (addons-608075)     
	I0925 18:29:43.163152   13872 main.go:141] libmachine: (addons-608075)   </features>
	I0925 18:29:43.163170   13872 main.go:141] libmachine: (addons-608075)   <cpu mode='host-passthrough'>
	I0925 18:29:43.163178   13872 main.go:141] libmachine: (addons-608075)   
	I0925 18:29:43.163188   13872 main.go:141] libmachine: (addons-608075)   </cpu>
	I0925 18:29:43.163200   13872 main.go:141] libmachine: (addons-608075)   <os>
	I0925 18:29:43.163210   13872 main.go:141] libmachine: (addons-608075)     <type>hvm</type>
	I0925 18:29:43.163219   13872 main.go:141] libmachine: (addons-608075)     <boot dev='cdrom'/>
	I0925 18:29:43.163229   13872 main.go:141] libmachine: (addons-608075)     <boot dev='hd'/>
	I0925 18:29:43.163237   13872 main.go:141] libmachine: (addons-608075)     <bootmenu enable='no'/>
	I0925 18:29:43.163242   13872 main.go:141] libmachine: (addons-608075)   </os>
	I0925 18:29:43.163262   13872 main.go:141] libmachine: (addons-608075)   <devices>
	I0925 18:29:43.163280   13872 main.go:141] libmachine: (addons-608075)     <disk type='file' device='cdrom'>
	I0925 18:29:43.163290   13872 main.go:141] libmachine: (addons-608075)       <source file='/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/boot2docker.iso'/>
	I0925 18:29:43.163305   13872 main.go:141] libmachine: (addons-608075)       <target dev='hdc' bus='scsi'/>
	I0925 18:29:43.163317   13872 main.go:141] libmachine: (addons-608075)       <readonly/>
	I0925 18:29:43.163327   13872 main.go:141] libmachine: (addons-608075)     </disk>
	I0925 18:29:43.163340   13872 main.go:141] libmachine: (addons-608075)     <disk type='file' device='disk'>
	I0925 18:29:43.163352   13872 main.go:141] libmachine: (addons-608075)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0925 18:29:43.163367   13872 main.go:141] libmachine: (addons-608075)       <source file='/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/addons-608075.rawdisk'/>
	I0925 18:29:43.163377   13872 main.go:141] libmachine: (addons-608075)       <target dev='hda' bus='virtio'/>
	I0925 18:29:43.163388   13872 main.go:141] libmachine: (addons-608075)     </disk>
	I0925 18:29:43.163401   13872 main.go:141] libmachine: (addons-608075)     <interface type='network'>
	I0925 18:29:43.163413   13872 main.go:141] libmachine: (addons-608075)       <source network='mk-addons-608075'/>
	I0925 18:29:43.163424   13872 main.go:141] libmachine: (addons-608075)       <model type='virtio'/>
	I0925 18:29:43.163437   13872 main.go:141] libmachine: (addons-608075)     </interface>
	I0925 18:29:43.163447   13872 main.go:141] libmachine: (addons-608075)     <interface type='network'>
	I0925 18:29:43.163465   13872 main.go:141] libmachine: (addons-608075)       <source network='default'/>
	I0925 18:29:43.163477   13872 main.go:141] libmachine: (addons-608075)       <model type='virtio'/>
	I0925 18:29:43.163487   13872 main.go:141] libmachine: (addons-608075)     </interface>
	I0925 18:29:43.163497   13872 main.go:141] libmachine: (addons-608075)     <serial type='pty'>
	I0925 18:29:43.163509   13872 main.go:141] libmachine: (addons-608075)       <target port='0'/>
	I0925 18:29:43.163518   13872 main.go:141] libmachine: (addons-608075)     </serial>
	I0925 18:29:43.163529   13872 main.go:141] libmachine: (addons-608075)     <console type='pty'>
	I0925 18:29:43.163539   13872 main.go:141] libmachine: (addons-608075)       <target type='serial' port='0'/>
	I0925 18:29:43.163549   13872 main.go:141] libmachine: (addons-608075)     </console>
	I0925 18:29:43.163558   13872 main.go:141] libmachine: (addons-608075)     <rng model='virtio'>
	I0925 18:29:43.163572   13872 main.go:141] libmachine: (addons-608075)       <backend model='random'>/dev/random</backend>
	I0925 18:29:43.163583   13872 main.go:141] libmachine: (addons-608075)     </rng>
	I0925 18:29:43.163593   13872 main.go:141] libmachine: (addons-608075)     
	I0925 18:29:43.163603   13872 main.go:141] libmachine: (addons-608075)     
	I0925 18:29:43.163612   13872 main.go:141] libmachine: (addons-608075)   </devices>
	I0925 18:29:43.163622   13872 main.go:141] libmachine: (addons-608075) </domain>
	I0925 18:29:43.163631   13872 main.go:141] libmachine: (addons-608075) 
	I0925 18:29:43.169882   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:29:38:cb in network default
	I0925 18:29:43.170441   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:29:43.170461   13872 main.go:141] libmachine: (addons-608075) Ensuring networks are active...
	I0925 18:29:43.171240   13872 main.go:141] libmachine: (addons-608075) Ensuring network default is active
	I0925 18:29:43.171537   13872 main.go:141] libmachine: (addons-608075) Ensuring network mk-addons-608075 is active
	I0925 18:29:43.171974   13872 main.go:141] libmachine: (addons-608075) Getting domain xml...
	I0925 18:29:43.172596   13872 main.go:141] libmachine: (addons-608075) Creating domain...
	I0925 18:29:44.759480   13872 main.go:141] libmachine: (addons-608075) Waiting to get IP...
	I0925 18:29:44.760168   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:29:44.760466   13872 main.go:141] libmachine: (addons-608075) DBG | unable to find current IP address of domain addons-608075 in network mk-addons-608075
	I0925 18:29:44.760484   13872 main.go:141] libmachine: (addons-608075) DBG | I0925 18:29:44.760453   13894 retry.go:31] will retry after 205.501052ms: waiting for machine to come up
	I0925 18:29:44.967866   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:29:44.968279   13872 main.go:141] libmachine: (addons-608075) DBG | unable to find current IP address of domain addons-608075 in network mk-addons-608075
	I0925 18:29:44.968307   13872 main.go:141] libmachine: (addons-608075) DBG | I0925 18:29:44.968229   13894 retry.go:31] will retry after 359.272907ms: waiting for machine to come up
	I0925 18:29:45.328778   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:29:45.329184   13872 main.go:141] libmachine: (addons-608075) DBG | unable to find current IP address of domain addons-608075 in network mk-addons-608075
	I0925 18:29:45.329205   13872 main.go:141] libmachine: (addons-608075) DBG | I0925 18:29:45.329148   13894 retry.go:31] will retry after 369.700059ms: waiting for machine to come up
	I0925 18:29:45.700777   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:29:45.701263   13872 main.go:141] libmachine: (addons-608075) DBG | unable to find current IP address of domain addons-608075 in network mk-addons-608075
	I0925 18:29:45.701305   13872 main.go:141] libmachine: (addons-608075) DBG | I0925 18:29:45.701220   13894 retry.go:31] will retry after 368.291821ms: waiting for machine to come up
	I0925 18:29:46.071538   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:29:46.071953   13872 main.go:141] libmachine: (addons-608075) DBG | unable to find current IP address of domain addons-608075 in network mk-addons-608075
	I0925 18:29:46.071977   13872 main.go:141] libmachine: (addons-608075) DBG | I0925 18:29:46.071916   13894 retry.go:31] will retry after 466.022179ms: waiting for machine to come up
	I0925 18:29:46.539439   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:29:46.539828   13872 main.go:141] libmachine: (addons-608075) DBG | unable to find current IP address of domain addons-608075 in network mk-addons-608075
	I0925 18:29:46.539860   13872 main.go:141] libmachine: (addons-608075) DBG | I0925 18:29:46.539810   13894 retry.go:31] will retry after 592.749419ms: waiting for machine to come up
	I0925 18:29:47.134915   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:29:47.135354   13872 main.go:141] libmachine: (addons-608075) DBG | unable to find current IP address of domain addons-608075 in network mk-addons-608075
	I0925 18:29:47.135380   13872 main.go:141] libmachine: (addons-608075) DBG | I0925 18:29:47.135314   13894 retry.go:31] will retry after 756.475059ms: waiting for machine to come up
	I0925 18:29:47.893206   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:29:47.893625   13872 main.go:141] libmachine: (addons-608075) DBG | unable to find current IP address of domain addons-608075 in network mk-addons-608075
	I0925 18:29:47.893653   13872 main.go:141] libmachine: (addons-608075) DBG | I0925 18:29:47.893604   13894 retry.go:31] will retry after 1.141759728s: waiting for machine to come up
	I0925 18:29:49.036835   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:29:49.037194   13872 main.go:141] libmachine: (addons-608075) DBG | unable to find current IP address of domain addons-608075 in network mk-addons-608075
	I0925 18:29:49.037220   13872 main.go:141] libmachine: (addons-608075) DBG | I0925 18:29:49.037155   13894 retry.go:31] will retry after 1.645095015s: waiting for machine to come up
	I0925 18:29:50.684861   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:29:50.685271   13872 main.go:141] libmachine: (addons-608075) DBG | unable to find current IP address of domain addons-608075 in network mk-addons-608075
	I0925 18:29:50.685298   13872 main.go:141] libmachine: (addons-608075) DBG | I0925 18:29:50.685224   13894 retry.go:31] will retry after 1.440352875s: waiting for machine to come up
	I0925 18:29:52.126670   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:29:52.127088   13872 main.go:141] libmachine: (addons-608075) DBG | unable to find current IP address of domain addons-608075 in network mk-addons-608075
	I0925 18:29:52.127108   13872 main.go:141] libmachine: (addons-608075) DBG | I0925 18:29:52.127056   13894 retry.go:31] will retry after 1.996929371s: waiting for machine to come up
	I0925 18:29:54.126248   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:29:54.126633   13872 main.go:141] libmachine: (addons-608075) DBG | unable to find current IP address of domain addons-608075 in network mk-addons-608075
	I0925 18:29:54.126665   13872 main.go:141] libmachine: (addons-608075) DBG | I0925 18:29:54.126583   13894 retry.go:31] will retry after 2.606192062s: waiting for machine to come up
	I0925 18:29:56.736336   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:29:56.736744   13872 main.go:141] libmachine: (addons-608075) DBG | unable to find current IP address of domain addons-608075 in network mk-addons-608075
	I0925 18:29:56.736766   13872 main.go:141] libmachine: (addons-608075) DBG | I0925 18:29:56.736696   13894 retry.go:31] will retry after 3.961149831s: waiting for machine to come up
	I0925 18:30:00.702281   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:00.702680   13872 main.go:141] libmachine: (addons-608075) DBG | unable to find current IP address of domain addons-608075 in network mk-addons-608075
	I0925 18:30:00.702720   13872 main.go:141] libmachine: (addons-608075) DBG | I0925 18:30:00.702673   13894 retry.go:31] will retry after 4.910248498s: waiting for machine to come up
	I0925 18:30:05.614246   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:05.614675   13872 main.go:141] libmachine: (addons-608075) Found IP for machine: 192.168.39.81
	I0925 18:30:05.614693   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has current primary IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:05.614699   13872 main.go:141] libmachine: (addons-608075) Reserving static IP address...
	I0925 18:30:05.615059   13872 main.go:141] libmachine: (addons-608075) DBG | unable to find host DHCP lease matching {name: "addons-608075", mac: "52:54:00:0e:fd:f4", ip: "192.168.39.81"} in network mk-addons-608075
	I0925 18:30:05.689066   13872 main.go:141] libmachine: (addons-608075) DBG | Getting to WaitForSSH function...
	I0925 18:30:05.689091   13872 main.go:141] libmachine: (addons-608075) Reserved static IP address: 192.168.39.81
	I0925 18:30:05.689102   13872 main.go:141] libmachine: (addons-608075) Waiting for SSH to be available...
	I0925 18:30:05.691795   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:05.692198   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:05.692225   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:05.692431   13872 main.go:141] libmachine: (addons-608075) DBG | Using SSH client type: external
	I0925 18:30:05.692444   13872 main.go:141] libmachine: (addons-608075) DBG | Using SSH private key: /home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa (-rw-------)
	I0925 18:30:05.692474   13872 main.go:141] libmachine: (addons-608075) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0925 18:30:05.692487   13872 main.go:141] libmachine: (addons-608075) DBG | About to run SSH command:
	I0925 18:30:05.692500   13872 main.go:141] libmachine: (addons-608075) DBG | exit 0
	I0925 18:30:05.829677   13872 main.go:141] libmachine: (addons-608075) DBG | SSH cmd err, output: <nil>: 
	I0925 18:30:05.829935   13872 main.go:141] libmachine: (addons-608075) KVM machine creation complete!
	I0925 18:30:05.830239   13872 main.go:141] libmachine: (addons-608075) Calling .GetConfigRaw
	I0925 18:30:05.830795   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:05.830968   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:05.831162   13872 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0925 18:30:05.831174   13872 main.go:141] libmachine: (addons-608075) Calling .GetState
	I0925 18:30:05.832478   13872 main.go:141] libmachine: Detecting operating system of created instance...
	I0925 18:30:05.832490   13872 main.go:141] libmachine: Waiting for SSH to be available...
	I0925 18:30:05.832500   13872 main.go:141] libmachine: Getting to WaitForSSH function...
	I0925 18:30:05.832507   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:05.834893   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:05.835208   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:05.835240   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:05.835344   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:05.835514   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:05.835646   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:05.835793   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:05.835919   13872 main.go:141] libmachine: Using SSH client type: native
	I0925 18:30:05.836143   13872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0925 18:30:05.836154   13872 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0925 18:30:05.948901   13872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 18:30:05.948921   13872 main.go:141] libmachine: Detecting the provisioner...
	I0925 18:30:05.948928   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:05.951339   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:05.951673   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:05.951706   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:05.951808   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:05.951999   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:05.952168   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:05.952289   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:05.952439   13872 main.go:141] libmachine: Using SSH client type: native
	I0925 18:30:05.952598   13872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0925 18:30:05.952607   13872 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0925 18:30:06.066390   13872 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0925 18:30:06.066457   13872 main.go:141] libmachine: found compatible host: buildroot
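Detecting the provisioner from the `cat /etc/os-release` output shown above reduces to parsing KEY=value lines and matching on `ID`. A simplified sketch (the real detector lives in libmachine and handles more edge cases):

```go
package main

import (
	"bufio"
	"strings"
)

// parseOSRelease parses KEY=value lines from /etc/os-release output,
// stripping surrounding double quotes from values. Lines without '='
// (including blank lines) are skipped.
func parseOSRelease(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		k, v, ok := strings.Cut(line, "=")
		if !ok || k == "" {
			continue
		}
		fields[k] = strings.Trim(v, `"`)
	}
	return fields
}
```

On the output above, `ID` resolves to `buildroot`, which is why the log reports "found compatible host: buildroot".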
	I0925 18:30:06.066465   13872 main.go:141] libmachine: Provisioning with buildroot...
	I0925 18:30:06.066475   13872 main.go:141] libmachine: (addons-608075) Calling .GetMachineName
	I0925 18:30:06.066744   13872 buildroot.go:166] provisioning hostname "addons-608075"
	I0925 18:30:06.066771   13872 main.go:141] libmachine: (addons-608075) Calling .GetMachineName
	I0925 18:30:06.066909   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:06.069537   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:06.069959   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:06.069988   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:06.070120   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:06.070283   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:06.070429   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:06.070544   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:06.070705   13872 main.go:141] libmachine: Using SSH client type: native
	I0925 18:30:06.070866   13872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0925 18:30:06.070877   13872 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-608075 && echo "addons-608075" | sudo tee /etc/hostname
	I0925 18:30:06.200798   13872 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-608075
	
	I0925 18:30:06.200844   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:06.203475   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:06.203813   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:06.203839   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:06.204005   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:06.204217   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:06.204354   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:06.204501   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:06.204648   13872 main.go:141] libmachine: Using SSH client type: native
	I0925 18:30:06.204804   13872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0925 18:30:06.204818   13872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-608075' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-608075/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-608075' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 18:30:06.326759   13872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 18:30:06.326791   13872 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19681-6065/.minikube CaCertPath:/home/jenkins/minikube-integration/19681-6065/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19681-6065/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19681-6065/.minikube}
	I0925 18:30:06.326835   13872 buildroot.go:174] setting up certificates
	I0925 18:30:06.326844   13872 provision.go:84] configureAuth start
	I0925 18:30:06.326854   13872 main.go:141] libmachine: (addons-608075) Calling .GetMachineName
	I0925 18:30:06.327150   13872 main.go:141] libmachine: (addons-608075) Calling .GetIP
	I0925 18:30:06.329573   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:06.329911   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:06.329936   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:06.330089   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:06.332141   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:06.332485   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:06.332513   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:06.332617   13872 provision.go:143] copyHostCerts
	I0925 18:30:06.332698   13872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-6065/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19681-6065/.minikube/ca.pem (1082 bytes)
	I0925 18:30:06.332822   13872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-6065/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19681-6065/.minikube/cert.pem (1123 bytes)
	I0925 18:30:06.332883   13872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-6065/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19681-6065/.minikube/key.pem (1675 bytes)
	I0925 18:30:06.332933   13872 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19681-6065/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19681-6065/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19681-6065/.minikube/certs/ca-key.pem org=jenkins.addons-608075 san=[127.0.0.1 192.168.39.81 addons-608075 localhost minikube]
	I0925 18:30:06.647453   13872 provision.go:177] copyRemoteCerts
	I0925 18:30:06.647511   13872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 18:30:06.647532   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:06.650234   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:06.650558   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:06.650588   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:06.650714   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:06.650891   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:06.651021   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:06.651127   13872 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa Username:docker}
	I0925 18:30:06.736664   13872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19681-6065/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0925 18:30:06.762290   13872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19681-6065/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0925 18:30:06.786966   13872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19681-6065/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0925 18:30:06.814778   13872 provision.go:87] duration metric: took 487.92136ms to configureAuth
	I0925 18:30:06.814805   13872 buildroot.go:189] setting minikube options for container-runtime
	I0925 18:30:06.814966   13872 config.go:182] Loaded profile config "addons-608075": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 18:30:06.814986   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:06.815276   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:06.817717   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:06.818101   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:06.818119   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:06.818275   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:06.818465   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:06.818629   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:06.818779   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:06.818928   13872 main.go:141] libmachine: Using SSH client type: native
	I0925 18:30:06.819129   13872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0925 18:30:06.819142   13872 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0925 18:30:06.935295   13872 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0925 18:30:06.935318   13872 buildroot.go:70] root file system type: tmpfs
	I0925 18:30:06.935455   13872 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0925 18:30:06.935479   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:06.938355   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:06.938724   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:06.938754   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:06.938939   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:06.939139   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:06.939342   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:06.939510   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:06.939644   13872 main.go:141] libmachine: Using SSH client type: native
	I0925 18:30:06.939822   13872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0925 18:30:06.939880   13872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0925 18:30:07.070595   13872 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0925 18:30:07.070626   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:07.073322   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:07.073748   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:07.073780   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:07.073972   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:07.074190   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:07.074324   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:07.074440   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:07.074579   13872 main.go:141] libmachine: Using SSH client type: native
	I0925 18:30:07.074762   13872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0925 18:30:07.074778   13872 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0925 18:30:08.885607   13872 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0925 18:30:08.885636   13872 main.go:141] libmachine: Checking connection to Docker...
	I0925 18:30:08.885646   13872 main.go:141] libmachine: (addons-608075) Calling .GetURL
	I0925 18:30:08.886805   13872 main.go:141] libmachine: (addons-608075) DBG | Using libvirt version 6000000
	I0925 18:30:08.888710   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:08.889074   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:08.889115   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:08.889279   13872 main.go:141] libmachine: Docker is up and running!
	I0925 18:30:08.889299   13872 main.go:141] libmachine: Reticulating splines...
	I0925 18:30:08.889306   13872 client.go:171] duration metric: took 26.46619597s to LocalClient.Create
	I0925 18:30:08.889331   13872 start.go:167] duration metric: took 26.466259114s to libmachine.API.Create "addons-608075"
	I0925 18:30:08.889342   13872 start.go:293] postStartSetup for "addons-608075" (driver="kvm2")
	I0925 18:30:08.889354   13872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 18:30:08.889388   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:08.889617   13872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 18:30:08.889641   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:08.891454   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:08.891718   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:08.891740   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:08.891845   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:08.892018   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:08.892151   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:08.892282   13872 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa Username:docker}
	I0925 18:30:08.980663   13872 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 18:30:08.984963   13872 info.go:137] Remote host: Buildroot 2023.02.9
	I0925 18:30:08.984994   13872 filesync.go:126] Scanning /home/jenkins/minikube-integration/19681-6065/.minikube/addons for local assets ...
	I0925 18:30:08.985076   13872 filesync.go:126] Scanning /home/jenkins/minikube-integration/19681-6065/.minikube/files for local assets ...
	I0925 18:30:08.985099   13872 start.go:296] duration metric: took 95.750217ms for postStartSetup
	I0925 18:30:08.985133   13872 main.go:141] libmachine: (addons-608075) Calling .GetConfigRaw
	I0925 18:30:08.985730   13872 main.go:141] libmachine: (addons-608075) Calling .GetIP
	I0925 18:30:08.987840   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:08.988213   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:08.988237   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:08.988420   13872 profile.go:143] Saving config to /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/config.json ...
	I0925 18:30:08.988619   13872 start.go:128] duration metric: took 26.583390089s to createHost
	I0925 18:30:08.988645   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:08.990760   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:08.991053   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:08.991090   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:08.991226   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:08.991389   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:08.991549   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:08.991638   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:08.991777   13872 main.go:141] libmachine: Using SSH client type: native
	I0925 18:30:08.991932   13872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0925 18:30:08.991942   13872 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0925 18:30:09.106480   13872 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727289009.080623322
	
	I0925 18:30:09.106500   13872 fix.go:216] guest clock: 1727289009.080623322
	I0925 18:30:09.106508   13872 fix.go:229] Guest: 2024-09-25 18:30:09.080623322 +0000 UTC Remote: 2024-09-25 18:30:08.98863274 +0000 UTC m=+26.684706032 (delta=91.990582ms)
	I0925 18:30:09.106527   13872 fix.go:200] guest clock delta is within tolerance: 91.990582ms
	I0925 18:30:09.106532   13872 start.go:83] releasing machines lock for "addons-608075", held for 26.701381715s
	I0925 18:30:09.106552   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:09.106822   13872 main.go:141] libmachine: (addons-608075) Calling .GetIP
	I0925 18:30:09.109584   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:09.109922   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:09.109960   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:09.110106   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:09.110601   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:09.110790   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:09.110881   13872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 18:30:09.110935   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:09.110950   13872 ssh_runner.go:195] Run: cat /version.json
	I0925 18:30:09.110964   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:09.113473   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:09.113823   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:09.113854   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:09.113891   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:09.114025   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:09.114220   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:09.114283   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:09.114308   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:09.114380   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:09.114479   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:09.114545   13872 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa Username:docker}
	I0925 18:30:09.114626   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:09.114768   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:09.114912   13872 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa Username:docker}
	I0925 18:30:09.195411   13872 ssh_runner.go:195] Run: systemctl --version
	I0925 18:30:09.225797   13872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0925 18:30:09.231861   13872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 18:30:09.231926   13872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 18:30:09.250638   13872 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0925 18:30:09.250670   13872 start.go:495] detecting cgroup driver to use...
	I0925 18:30:09.250799   13872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 18:30:09.271803   13872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0925 18:30:09.284389   13872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 18:30:09.297077   13872 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0925 18:30:09.297158   13872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0925 18:30:09.309987   13872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 18:30:09.322624   13872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 18:30:09.334648   13872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 18:30:09.346634   13872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 18:30:09.359052   13872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0925 18:30:09.371329   13872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0925 18:30:09.383573   13872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0925 18:30:09.396160   13872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 18:30:09.407182   13872 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0925 18:30:09.407262   13872 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0925 18:30:09.418875   13872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 18:30:09.429841   13872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 18:30:09.550429   13872 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0925 18:30:09.571267   13872 start.go:495] detecting cgroup driver to use...
	I0925 18:30:09.571357   13872 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0925 18:30:09.587808   13872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 18:30:09.609149   13872 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 18:30:09.627271   13872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 18:30:09.642919   13872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 18:30:09.658439   13872 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0925 18:30:09.866069   13872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 18:30:09.881659   13872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 18:30:09.901107   13872 ssh_runner.go:195] Run: which cri-dockerd
	I0925 18:30:09.904970   13872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 18:30:09.915050   13872 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0925 18:30:09.935750   13872 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0925 18:30:10.052211   13872 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0925 18:30:10.173143   13872 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0925 18:30:10.173282   13872 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0925 18:30:10.191378   13872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 18:30:10.304373   13872 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 18:30:12.673700   13872 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.369281474s)
	I0925 18:30:12.673778   13872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0925 18:30:12.687699   13872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0925 18:30:12.701123   13872 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0925 18:30:12.819310   13872 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 18:30:12.943913   13872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 18:30:13.071827   13872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0925 18:30:13.089774   13872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0925 18:30:13.104080   13872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 18:30:13.230195   13872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0925 18:30:13.312568   13872 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0925 18:30:13.312655   13872 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0925 18:30:13.319740   13872 start.go:563] Will wait 60s for crictl version
	I0925 18:30:13.319792   13872 ssh_runner.go:195] Run: which crictl
	I0925 18:30:13.323917   13872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0925 18:30:13.362908   13872 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0925 18:30:13.362968   13872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 18:30:13.391157   13872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 18:30:13.420841   13872 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0925 18:30:13.420888   13872 main.go:141] libmachine: (addons-608075) Calling .GetIP
	I0925 18:30:13.423617   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:13.423905   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:13.423929   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:13.424159   13872 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0925 18:30:13.428328   13872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 18:30:13.441567   13872 kubeadm.go:883] updating cluster {Name:addons-608075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-608075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0925 18:30:13.441704   13872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 18:30:13.441771   13872 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 18:30:13.458327   13872 docker.go:685] Got preloaded images: 
	I0925 18:30:13.458348   13872 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0925 18:30:13.458406   13872 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 18:30:13.468500   13872 ssh_runner.go:195] Run: which lz4
	I0925 18:30:13.472497   13872 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0925 18:30:13.476618   13872 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0925 18:30:13.476649   13872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19681-6065/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I0925 18:30:14.679198   13872 docker.go:649] duration metric: took 1.20672689s to copy over tarball
	I0925 18:30:14.679298   13872 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0925 18:30:16.658720   13872 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.979392705s)
	I0925 18:30:16.658750   13872 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0925 18:30:16.701451   13872 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0925 18:30:16.711728   13872 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0925 18:30:16.729186   13872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 18:30:16.853887   13872 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 18:30:19.451782   13872 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.597857032s)
	I0925 18:30:19.451873   13872 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 18:30:19.471114   13872 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0925 18:30:19.471161   13872 cache_images.go:84] Images are preloaded, skipping loading
	I0925 18:30:19.471171   13872 kubeadm.go:934] updating node { 192.168.39.81 8443 v1.31.1 docker true true} ...
	I0925 18:30:19.471269   13872 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-608075 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-608075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0925 18:30:19.471336   13872 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0925 18:30:19.526864   13872 cni.go:84] Creating CNI manager for ""
	I0925 18:30:19.526894   13872 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 18:30:19.526903   13872 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0925 18:30:19.526922   13872 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.81 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-608075 NodeName:addons-608075 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0925 18:30:19.527045   13872 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-608075"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0925 18:30:19.527124   13872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0925 18:30:19.537479   13872 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 18:30:19.537551   13872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 18:30:19.547925   13872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0925 18:30:19.566119   13872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 18:30:19.584771   13872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0925 18:30:19.603122   13872 ssh_runner.go:195] Run: grep 192.168.39.81	control-plane.minikube.internal$ /etc/hosts
	I0925 18:30:19.607355   13872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 18:30:19.620167   13872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 18:30:19.735633   13872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0925 18:30:19.757056   13872 certs.go:68] Setting up /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075 for IP: 192.168.39.81
	I0925 18:30:19.757081   13872 certs.go:194] generating shared ca certs ...
	I0925 18:30:19.757096   13872 certs.go:226] acquiring lock for ca certs: {Name:mkcaaeaa18f6cb2935bafec7eb00cf94369b9ca9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:19.757240   13872 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19681-6065/.minikube/ca.key
	I0925 18:30:20.029599   13872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19681-6065/.minikube/ca.crt ...
	I0925 18:30:20.029635   13872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-6065/.minikube/ca.crt: {Name:mk120866a57306a13478906df980c7fe83c38824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:20.029839   13872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19681-6065/.minikube/ca.key ...
	I0925 18:30:20.029854   13872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-6065/.minikube/ca.key: {Name:mk25bd04e37ea79800faf4d362a93e84487327d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:20.029960   13872 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19681-6065/.minikube/proxy-client-ca.key
	I0925 18:30:20.163855   13872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19681-6065/.minikube/proxy-client-ca.crt ...
	I0925 18:30:20.163888   13872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-6065/.minikube/proxy-client-ca.crt: {Name:mka40e24dc17d2530fd6c565a5882887c6918327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:20.164094   13872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19681-6065/.minikube/proxy-client-ca.key ...
	I0925 18:30:20.164110   13872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-6065/.minikube/proxy-client-ca.key: {Name:mkd1bb62055dbd45807f710600433473d7071ace Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:20.164212   13872 certs.go:256] generating profile certs ...
	I0925 18:30:20.164287   13872 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.key
	I0925 18:30:20.164306   13872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt with IP's: []
	I0925 18:30:20.267145   13872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt ...
	I0925 18:30:20.267180   13872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: {Name:mk5e43aee9ca627d373aac465ab4e3eab749e8c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:20.267374   13872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.key ...
	I0925 18:30:20.267390   13872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.key: {Name:mkcec51525be9f0e1b345d0261cbac2c29523ab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:20.267459   13872 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/apiserver.key.024e2ee0
	I0925 18:30:20.267477   13872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/apiserver.crt.024e2ee0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.81]
	I0925 18:30:20.410188   13872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/apiserver.crt.024e2ee0 ...
	I0925 18:30:20.410219   13872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/apiserver.crt.024e2ee0: {Name:mkbf07072181b94d8eaa982959117d007b667049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:20.410376   13872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/apiserver.key.024e2ee0 ...
	I0925 18:30:20.410393   13872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/apiserver.key.024e2ee0: {Name:mk683cbd6b20fa0172a78b87fe4d9f690f2422d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:20.410466   13872 certs.go:381] copying /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/apiserver.crt.024e2ee0 -> /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/apiserver.crt
	I0925 18:30:20.410537   13872 certs.go:385] copying /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/apiserver.key.024e2ee0 -> /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/apiserver.key
	I0925 18:30:20.410581   13872 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/proxy-client.key
	I0925 18:30:20.410597   13872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/proxy-client.crt with IP's: []
	I0925 18:30:20.569670   13872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/proxy-client.crt ...
	I0925 18:30:20.569712   13872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/proxy-client.crt: {Name:mkd9aa838a193c10507b3fef11e8c9fc54465223 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:20.569935   13872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/proxy-client.key ...
	I0925 18:30:20.569960   13872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/proxy-client.key: {Name:mkebd6d29d572855d4bf87a71d6d35de9f7c8fac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:20.570209   13872 certs.go:484] found cert: /home/jenkins/minikube-integration/19681-6065/.minikube/certs/ca-key.pem (1679 bytes)
	I0925 18:30:20.570254   13872 certs.go:484] found cert: /home/jenkins/minikube-integration/19681-6065/.minikube/certs/ca.pem (1082 bytes)
	I0925 18:30:20.570278   13872 certs.go:484] found cert: /home/jenkins/minikube-integration/19681-6065/.minikube/certs/cert.pem (1123 bytes)
	I0925 18:30:20.570302   13872 certs.go:484] found cert: /home/jenkins/minikube-integration/19681-6065/.minikube/certs/key.pem (1675 bytes)
	I0925 18:30:20.570831   13872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19681-6065/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 18:30:20.598809   13872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19681-6065/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0925 18:30:20.625427   13872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19681-6065/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 18:30:20.652227   13872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19681-6065/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0925 18:30:20.679371   13872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0925 18:30:20.705225   13872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0925 18:30:20.731663   13872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 18:30:20.759797   13872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0925 18:30:20.787558   13872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19681-6065/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 18:30:20.819587   13872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 18:30:20.839056   13872 ssh_runner.go:195] Run: openssl version
	I0925 18:30:20.845836   13872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 18:30:20.857835   13872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 18:30:20.862849   13872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 25 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0925 18:30:20.862917   13872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 18:30:20.869323   13872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0925 18:30:20.880976   13872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0925 18:30:20.885476   13872 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0925 18:30:20.885525   13872 kubeadm.go:392] StartCluster: {Name:addons-608075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:addons-608075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 18:30:20.885665   13872 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 18:30:20.904053   13872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 18:30:20.914597   13872 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 18:30:20.924279   13872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 18:30:20.934276   13872 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 18:30:20.934298   13872 kubeadm.go:157] found existing configuration files:
	
	I0925 18:30:20.934350   13872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0925 18:30:20.943895   13872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0925 18:30:20.943951   13872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0925 18:30:20.954270   13872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0925 18:30:20.964126   13872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0925 18:30:20.964195   13872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0925 18:30:20.974234   13872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0925 18:30:20.983899   13872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0925 18:30:20.983961   13872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0925 18:30:20.994025   13872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0925 18:30:21.003333   13872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0925 18:30:21.003387   13872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0925 18:30:21.013549   13872 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0925 18:30:21.068616   13872 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0925 18:30:21.068723   13872 kubeadm.go:310] [preflight] Running pre-flight checks
	I0925 18:30:21.166080   13872 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 18:30:21.166250   13872 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 18:30:21.166383   13872 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0925 18:30:21.189403   13872 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 18:30:21.192432   13872 out.go:235]   - Generating certificates and keys ...
	I0925 18:30:21.192522   13872 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0925 18:30:21.192586   13872 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0925 18:30:21.311790   13872 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0925 18:30:21.442546   13872 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0925 18:30:21.555114   13872 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0925 18:30:21.655920   13872 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0925 18:30:21.795770   13872 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0925 18:30:21.795958   13872 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-608075 localhost] and IPs [192.168.39.81 127.0.0.1 ::1]
	I0925 18:30:22.073581   13872 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0925 18:30:22.073743   13872 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-608075 localhost] and IPs [192.168.39.81 127.0.0.1 ::1]
	I0925 18:30:22.151702   13872 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0925 18:30:22.447659   13872 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0925 18:30:22.534386   13872 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0925 18:30:22.534458   13872 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 18:30:22.589967   13872 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 18:30:22.708225   13872 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0925 18:30:22.879420   13872 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 18:30:23.039444   13872 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 18:30:23.141170   13872 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 18:30:23.141731   13872 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 18:30:23.144258   13872 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 18:30:23.146291   13872 out.go:235]   - Booting up control plane ...
	I0925 18:30:23.146419   13872 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 18:30:23.146530   13872 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 18:30:23.148462   13872 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 18:30:23.165116   13872 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 18:30:23.172496   13872 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 18:30:23.172553   13872 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0925 18:30:23.295269   13872 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0925 18:30:23.295417   13872 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0925 18:30:23.796135   13872 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.136659ms
	I0925 18:30:23.796247   13872 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0925 18:30:29.297663   13872 kubeadm.go:310] [api-check] The API server is healthy after 5.502073163s
	I0925 18:30:29.310221   13872 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 18:30:29.326747   13872 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 18:30:29.357052   13872 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 18:30:29.357250   13872 kubeadm.go:310] [mark-control-plane] Marking the node addons-608075 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0925 18:30:29.370618   13872 kubeadm.go:310] [bootstrap-token] Using token: 1uso7e.0yag1z1b4bj2faw7
	I0925 18:30:29.372111   13872 out.go:235]   - Configuring RBAC rules ...
	I0925 18:30:29.372246   13872 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 18:30:29.383244   13872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 18:30:29.396781   13872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 18:30:29.401652   13872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 18:30:29.406202   13872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 18:30:29.417642   13872 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 18:30:29.711034   13872 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 18:30:30.191046   13872 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0925 18:30:30.705576   13872 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0925 18:30:30.706614   13872 kubeadm.go:310] 
	I0925 18:30:30.706710   13872 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0925 18:30:30.706722   13872 kubeadm.go:310] 
	I0925 18:30:30.706829   13872 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0925 18:30:30.706849   13872 kubeadm.go:310] 
	I0925 18:30:30.706890   13872 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0925 18:30:30.706983   13872 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 18:30:30.707055   13872 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 18:30:30.707076   13872 kubeadm.go:310] 
	I0925 18:30:30.707152   13872 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0925 18:30:30.707164   13872 kubeadm.go:310] 
	I0925 18:30:30.707227   13872 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0925 18:30:30.707236   13872 kubeadm.go:310] 
	I0925 18:30:30.707308   13872 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0925 18:30:30.707407   13872 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 18:30:30.707513   13872 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 18:30:30.707531   13872 kubeadm.go:310] 
	I0925 18:30:30.707608   13872 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0925 18:30:30.707673   13872 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0925 18:30:30.707679   13872 kubeadm.go:310] 
	I0925 18:30:30.707749   13872 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1uso7e.0yag1z1b4bj2faw7 \
	I0925 18:30:30.707839   13872 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1836a8c6a1d301c249dc4487193669d468d24aea5f9ce45f4ac26e28ee5e1dc \
	I0925 18:30:30.707858   13872 kubeadm.go:310] 	--control-plane 
	I0925 18:30:30.707864   13872 kubeadm.go:310] 
	I0925 18:30:30.707939   13872 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0925 18:30:30.707945   13872 kubeadm.go:310] 
	I0925 18:30:30.708019   13872 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1uso7e.0yag1z1b4bj2faw7 \
	I0925 18:30:30.708109   13872 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c1836a8c6a1d301c249dc4487193669d468d24aea5f9ce45f4ac26e28ee5e1dc 
	I0925 18:30:30.709661   13872 kubeadm.go:310] W0925 18:30:21.041124    1512 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0925 18:30:30.710019   13872 kubeadm.go:310] W0925 18:30:21.042187    1512 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0925 18:30:30.710178   13872 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0925 18:30:30.710214   13872 cni.go:84] Creating CNI manager for ""
	I0925 18:30:30.710233   13872 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 18:30:30.712338   13872 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 18:30:30.714203   13872 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0925 18:30:30.724847   13872 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0925 18:30:30.742680   13872 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 18:30:30.742748   13872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:30.742802   13872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-608075 minikube.k8s.io/updated_at=2024_09_25T18_30_30_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=cb9e6220ecbd737c1d09ad9630c6f144f437664a minikube.k8s.io/name=addons-608075 minikube.k8s.io/primary=true
	I0925 18:30:30.756622   13872 ops.go:34] apiserver oom_adj: -16
	I0925 18:30:30.896066   13872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:31.396976   13872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:31.896284   13872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:32.396995   13872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:32.896162   13872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:33.397117   13872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:33.896302   13872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:34.397171   13872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:34.896211   13872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:35.018181   13872 kubeadm.go:1113] duration metric: took 4.275500197s to wait for elevateKubeSystemPrivileges
	I0925 18:30:35.018214   13872 kubeadm.go:394] duration metric: took 14.132691176s to StartCluster
	I0925 18:30:35.018234   13872 settings.go:142] acquiring lock: {Name:mk9bde6948ab123df4312062952fd0c0e2a76387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:35.018369   13872 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19681-6065/kubeconfig
	I0925 18:30:35.018781   13872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-6065/kubeconfig: {Name:mkab1e262b9139f7dd2c759130b49ee75f662bc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:35.018982   13872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 18:30:35.018995   13872 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 18:30:35.019052   13872 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0925 18:30:35.019181   13872 addons.go:69] Setting yakd=true in profile "addons-608075"
	I0925 18:30:35.019195   13872 addons.go:69] Setting default-storageclass=true in profile "addons-608075"
	I0925 18:30:35.019201   13872 addons.go:69] Setting cloud-spanner=true in profile "addons-608075"
	I0925 18:30:35.019221   13872 addons.go:69] Setting storage-provisioner=true in profile "addons-608075"
	I0925 18:30:35.019223   13872 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-608075"
	I0925 18:30:35.019232   13872 addons.go:234] Setting addon storage-provisioner=true in "addons-608075"
	I0925 18:30:35.019240   13872 addons.go:234] Setting addon cloud-spanner=true in "addons-608075"
	I0925 18:30:35.019238   13872 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-608075"
	I0925 18:30:35.019268   13872 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-608075"
	I0925 18:30:35.019272   13872 host.go:66] Checking if "addons-608075" exists ...
	I0925 18:30:35.019273   13872 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-608075"
	I0925 18:30:35.019278   13872 config.go:182] Loaded profile config "addons-608075": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 18:30:35.019288   13872 addons.go:69] Setting volcano=true in profile "addons-608075"
	I0925 18:30:35.019298   13872 addons.go:234] Setting addon volcano=true in "addons-608075"
	I0925 18:30:35.019303   13872 host.go:66] Checking if "addons-608075" exists ...
	I0925 18:30:35.019280   13872 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-608075"
	I0925 18:30:35.019318   13872 host.go:66] Checking if "addons-608075" exists ...
	I0925 18:30:35.019646   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.019672   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.019699   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.019700   13872 addons.go:69] Setting volumesnapshots=true in profile "addons-608075"
	I0925 18:30:35.019712   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.019716   13872 addons.go:234] Setting addon volumesnapshots=true in "addons-608075"
	I0925 18:30:35.019213   13872 addons.go:69] Setting registry=true in profile "addons-608075"
	I0925 18:30:35.019204   13872 addons.go:234] Setting addon yakd=true in "addons-608075"
	I0925 18:30:35.019731   13872 addons.go:234] Setting addon registry=true in "addons-608075"
	I0925 18:30:35.019733   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.019745   13872 host.go:66] Checking if "addons-608075" exists ...
	I0925 18:30:35.019751   13872 host.go:66] Checking if "addons-608075" exists ...
	I0925 18:30:35.019750   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.019777   13872 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-608075"
	I0925 18:30:35.019791   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.019795   13872 addons.go:69] Setting inspektor-gadget=true in profile "addons-608075"
	I0925 18:30:35.019808   13872 addons.go:234] Setting addon inspektor-gadget=true in "addons-608075"
	I0925 18:30:35.019851   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.019878   13872 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-608075"
	I0925 18:30:35.019893   13872 addons.go:69] Setting metrics-server=true in profile "addons-608075"
	I0925 18:30:35.019914   13872 addons.go:234] Setting addon metrics-server=true in "addons-608075"
	I0925 18:30:35.019942   13872 host.go:66] Checking if "addons-608075" exists ...
	I0925 18:30:35.020095   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.020125   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.020184   13872 host.go:66] Checking if "addons-608075" exists ...
	I0925 18:30:35.020213   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.020235   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.020257   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.020267   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.019260   13872 host.go:66] Checking if "addons-608075" exists ...
	I0925 18:30:35.020339   13872 addons.go:69] Setting ingress=true in profile "addons-608075"
	I0925 18:30:35.020363   13872 addons.go:234] Setting addon ingress=true in "addons-608075"
	I0925 18:30:35.020367   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.020461   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.020523   13872 addons.go:69] Setting gcp-auth=true in profile "addons-608075"
	I0925 18:30:35.020549   13872 mustload.go:65] Loading cluster: addons-608075
	I0925 18:30:35.020562   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.020601   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.020663   13872 addons.go:69] Setting ingress-dns=true in profile "addons-608075"
	I0925 18:30:35.020682   13872 addons.go:234] Setting addon ingress-dns=true in "addons-608075"
	I0925 18:30:35.020717   13872 host.go:66] Checking if "addons-608075" exists ...
	I0925 18:30:35.020743   13872 config.go:182] Loaded profile config "addons-608075": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 18:30:35.020831   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.020874   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.020971   13872 host.go:66] Checking if "addons-608075" exists ...
	I0925 18:30:35.021125   13872 host.go:66] Checking if "addons-608075" exists ...
	I0925 18:30:35.021520   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.021599   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.021623   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.021659   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.021728   13872 out.go:177] * Verifying Kubernetes components...
	I0925 18:30:35.021999   13872 host.go:66] Checking if "addons-608075" exists ...
	I0925 18:30:35.042303   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35289
	I0925 18:30:35.042441   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32973
	I0925 18:30:35.042534   13872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 18:30:35.044906   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35193
	I0925 18:30:35.045079   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.045106   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.045273   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.045299   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.045312   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.045329   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.045855   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37783
	I0925 18:30:35.045966   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45431
	I0925 18:30:35.046027   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42745
	I0925 18:30:35.046097   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.046241   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.046517   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.046611   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.046841   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.046856   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.046950   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.047117   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.047129   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.047278   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.047295   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.047424   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.047439   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.047524   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.047585   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.047627   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.048412   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.048429   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.048501   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.048625   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.048660   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.048782   13872 main.go:141] libmachine: (addons-608075) Calling .GetState
	I0925 18:30:35.049206   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.049241   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.049412   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.049486   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.049944   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.049988   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.050662   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.050697   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.051829   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.054499   13872 addons.go:234] Setting addon default-storageclass=true in "addons-608075"
	I0925 18:30:35.054542   13872 host.go:66] Checking if "addons-608075" exists ...
	I0925 18:30:35.054976   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.054999   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.056206   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.056245   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.056288   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.056338   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.066785   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39711
	I0925 18:30:35.067679   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.068190   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.068209   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.068601   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.069287   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.069329   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.076514   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44213
	I0925 18:30:35.077233   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.077908   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.077933   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.078159   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41147
	I0925 18:30:35.078446   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.078565   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.078975   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.078996   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.079050   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.079104   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.079375   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.080664   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.080708   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.083206   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42165
	I0925 18:30:35.084045   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.084561   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.084587   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.084900   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.085029   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39955
	I0925 18:30:35.085210   13872 main.go:141] libmachine: (addons-608075) Calling .GetState
	I0925 18:30:35.085485   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.086470   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.086493   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.087607   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:35.089849   13872 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0925 18:30:35.091279   13872 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0925 18:30:35.091305   13872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0925 18:30:35.091329   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:35.092031   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.092741   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.092784   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.094816   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36497
	I0925 18:30:35.094878   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.095235   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:35.095273   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.095279   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.095508   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:35.095727   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:35.095832   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.095861   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.096052   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:35.096196   13872 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa Username:docker}
	I0925 18:30:35.096386   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.097048   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.097096   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.104417   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38335
	I0925 18:30:35.105109   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.105716   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.105741   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.106107   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.106307   13872 main.go:141] libmachine: (addons-608075) Calling .GetState
	I0925 18:30:35.108844   13872 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-608075"
	I0925 18:30:35.108889   13872 host.go:66] Checking if "addons-608075" exists ...
	I0925 18:30:35.109235   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.109265   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.111785   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44399
	I0925 18:30:35.112129   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I0925 18:30:35.112501   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.112947   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.113021   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.113043   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.113431   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.113431   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.113452   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.113642   13872 main.go:141] libmachine: (addons-608075) Calling .GetState
	I0925 18:30:35.114226   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.114695   13872 main.go:141] libmachine: (addons-608075) Calling .GetState
	I0925 18:30:35.116033   13872 host.go:66] Checking if "addons-608075" exists ...
	I0925 18:30:35.116837   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.116882   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.117116   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:35.119180   13872 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0925 18:30:35.119574   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34127
	I0925 18:30:35.119947   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.120062   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46723
	I0925 18:30:35.120423   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.120439   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.120538   13872 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0925 18:30:35.120551   13872 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0925 18:30:35.120567   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:35.121187   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.121726   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39173
	I0925 18:30:35.122161   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.122736   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.122754   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.122809   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42517
	I0925 18:30:35.123231   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.123824   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.123865   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.124083   13872 main.go:141] libmachine: (addons-608075) Calling .GetState
	I0925 18:30:35.124138   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.124848   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.124964   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:35.124984   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.125331   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.125350   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.125618   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.125718   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.126032   13872 main.go:141] libmachine: (addons-608075) Calling .GetState
	I0925 18:30:35.126033   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:35.126399   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:35.126541   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.126558   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.126583   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:35.126755   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:35.126900   13872 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa Username:docker}
	I0925 18:30:35.127366   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.127534   13872 main.go:141] libmachine: (addons-608075) Calling .GetState
	I0925 18:30:35.128043   13872 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 18:30:35.128808   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:35.129548   13872 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 18:30:35.129582   13872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0925 18:30:35.129588   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:35.129599   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:35.130873   13872 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0925 18:30:35.131672   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42119
	I0925 18:30:35.131728   13872 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0925 18:30:35.132320   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39401
	I0925 18:30:35.132609   13872 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0925 18:30:35.132626   13872 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0925 18:30:35.132655   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:35.132845   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.133452   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.133469   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.133743   13872 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0925 18:30:35.133763   13872 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0925 18:30:35.133781   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:35.133872   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.134455   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.134498   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.135208   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.135774   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:35.135803   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.135977   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:35.136126   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:35.136296   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:35.136479   13872 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa Username:docker}
	I0925 18:30:35.137806   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41373
	I0925 18:30:35.138005   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.138268   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.138450   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.138469   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.139074   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.139094   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.139292   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.139637   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.139707   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.140227   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.140268   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.140530   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.140599   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:35.140613   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.141201   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:35.141236   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:35.141473   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:35.141485   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:35.141546   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:35.141634   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.141722   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:35.141774   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:35.141945   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:35.141967   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:35.142143   13872 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa Username:docker}
	I0925 18:30:35.142156   13872 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa Username:docker}
	I0925 18:30:35.142705   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43761
	I0925 18:30:35.143143   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.144163   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.144179   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.144705   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.144884   13872 main.go:141] libmachine: (addons-608075) Calling .GetState
	I0925 18:30:35.146486   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:35.148926   13872 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0925 18:30:35.149065   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42411
	I0925 18:30:35.150530   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45285
	I0925 18:30:35.150561   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.150612   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43077
	I0925 18:30:35.151195   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.151271   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.151284   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.151338   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.151722   13872 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0925 18:30:35.151739   13872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0925 18:30:35.151756   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:35.152282   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.152370   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.152386   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.152388   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.152396   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.152681   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:35.152722   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.152938   13872 main.go:141] libmachine: (addons-608075) Calling .GetState
	I0925 18:30:35.153518   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.153845   13872 main.go:141] libmachine: (addons-608075) Calling .GetState
	I0925 18:30:35.154889   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:35.155889   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34629
	I0925 18:30:35.156315   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.156412   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:35.156458   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.156680   13872 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0925 18:30:35.156905   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:35.156919   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.157048   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:35.157179   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.157191   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.157331   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:35.157419   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:35.157503   13872 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa Username:docker}
	I0925 18:30:35.157948   13872 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0925 18:30:35.158110   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.158324   13872 main.go:141] libmachine: (addons-608075) Calling .GetState
	I0925 18:30:35.158818   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37661
	I0925 18:30:35.159193   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.159370   13872 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0925 18:30:35.159391   13872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0925 18:30:35.159412   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:35.159467   13872 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0925 18:30:35.160382   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:35.160769   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.160781   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.161297   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.161815   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38495
	I0925 18:30:35.162022   13872 main.go:141] libmachine: (addons-608075) Calling .GetState
	I0925 18:30:35.162054   13872 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0925 18:30:35.162186   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.162972   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.162988   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.163048   13872 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0925 18:30:35.163381   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.163614   13872 main.go:141] libmachine: (addons-608075) Calling .GetState
	I0925 18:30:35.164549   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.164948   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:35.165117   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:35.165134   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.165265   13872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0925 18:30:35.165281   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:35.165627   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:35.165870   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:35.166024   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:35.166163   13872 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0925 18:30:35.166181   13872 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa Username:docker}
	I0925 18:30:35.166317   13872 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0925 18:30:35.166338   13872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0925 18:30:35.166356   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:35.167434   13872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0925 18:30:35.167537   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42405
	I0925 18:30:35.168006   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.168607   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.168636   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.168834   13872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0925 18:30:35.168876   13872 out.go:177]   - Using image docker.io/registry:2.8.3
	I0925 18:30:35.169073   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.169340   13872 main.go:141] libmachine: (addons-608075) Calling .GetState
	I0925 18:30:35.169408   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.170051   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:35.170078   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.170379   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:35.170538   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:35.170664   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:35.170832   13872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0925 18:30:35.171002   13872 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0925 18:30:35.171014   13872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0925 18:30:35.171031   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:35.171062   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:35.170767   13872 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa Username:docker}
	I0925 18:30:35.172459   13872 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0925 18:30:35.172500   13872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0925 18:30:35.173991   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.174132   13872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0925 18:30:35.174406   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:35.174426   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.174613   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:35.174780   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:35.174900   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:35.174997   13872 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa Username:docker}
	I0925 18:30:35.175220   13872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0925 18:30:35.175319   13872 out.go:177]   - Using image docker.io/busybox:stable
	I0925 18:30:35.175516   13872 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0925 18:30:35.175535   13872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0925 18:30:35.175551   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:35.176991   13872 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0925 18:30:35.177005   13872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0925 18:30:35.177017   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:35.178076   13872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0925 18:30:35.179116   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.179306   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:35.179509   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:35.179331   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.179661   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:35.180799   13872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0925 18:30:35.181659   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34375
	I0925 18:30:35.181672   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:35.181735   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.181752   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:35.181768   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.181793   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:35.181867   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:35.181922   13872 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa Username:docker}
	I0925 18:30:35.182180   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:35.182331   13872 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa Username:docker}
	I0925 18:30:35.182532   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.182915   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.182931   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.183268   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.183371   13872 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0925 18:30:35.183481   13872 main.go:141] libmachine: (addons-608075) Calling .GetState
	I0925 18:30:35.184390   13872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0925 18:30:35.184412   13872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0925 18:30:35.184424   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:35.185202   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:35.185636   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38085
	I0925 18:30:35.186284   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:35.186668   13872 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0925 18:30:35.186730   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:35.186743   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:35.187099   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:35.187316   13872 main.go:141] libmachine: (addons-608075) Calling .GetState
	I0925 18:30:35.188046   13872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0925 18:30:35.188070   13872 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0925 18:30:35.188093   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:35.188675   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.189125   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:35.189309   13872 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 18:30:35.189318   13872 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 18:30:35.189328   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:35.189476   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:35.189488   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.190490   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:35.190638   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:35.190746   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:35.191001   13872 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa Username:docker}
	I0925 18:30:35.191501   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.191896   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:35.191918   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.192075   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:35.192258   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:35.192409   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:35.192511   13872 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa Username:docker}
	I0925 18:30:35.192748   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.193294   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:35.193313   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:35.193438   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:35.193607   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:35.193752   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:35.193903   13872 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa Username:docker}
	W0925 18:30:35.208574   13872 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39106->192.168.39.81:22: read: connection reset by peer
	I0925 18:30:35.208606   13872 retry.go:31] will retry after 271.409994ms: ssh: handshake failed: read tcp 192.168.39.1:39106->192.168.39.81:22: read: connection reset by peer
	W0925 18:30:35.208944   13872 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39134->192.168.39.81:22: read: connection reset by peer
	I0925 18:30:35.208961   13872 retry.go:31] will retry after 364.480605ms: ssh: handshake failed: read tcp 192.168.39.1:39134->192.168.39.81:22: read: connection reset by peer
	W0925 18:30:35.209035   13872 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39150->192.168.39.81:22: read: connection reset by peer
	I0925 18:30:35.209042   13872 retry.go:31] will retry after 187.471249ms: ssh: handshake failed: read tcp 192.168.39.1:39150->192.168.39.81:22: read: connection reset by peer
	W0925 18:30:35.397877   13872 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39160->192.168.39.81:22: read: connection reset by peer
	I0925 18:30:35.397909   13872 retry.go:31] will retry after 340.841831ms: ssh: handshake failed: read tcp 192.168.39.1:39160->192.168.39.81:22: read: connection reset by peer
	I0925 18:30:35.636356   13872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0925 18:30:35.636394   13872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0925 18:30:35.708732   13872 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0925 18:30:35.708760   13872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0925 18:30:35.730805   13872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 18:30:35.761309   13872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0925 18:30:35.853742   13872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0925 18:30:35.854109   13872 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0925 18:30:35.854125   13872 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0925 18:30:35.885730   13872 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0925 18:30:35.885756   13872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0925 18:30:35.886800   13872 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0925 18:30:35.886819   13872 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0925 18:30:35.889806   13872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0925 18:30:35.908009   13872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0925 18:30:35.916643   13872 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0925 18:30:35.916672   13872 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0925 18:30:35.960710   13872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0925 18:30:36.054816   13872 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0925 18:30:36.054845   13872 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0925 18:30:36.083051   13872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0925 18:30:36.083076   13872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0925 18:30:36.098868   13872 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0925 18:30:36.098891   13872 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0925 18:30:36.104125   13872 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0925 18:30:36.104145   13872 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0925 18:30:36.159875   13872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0925 18:30:36.239869   13872 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0925 18:30:36.239891   13872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0925 18:30:36.533175   13872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0925 18:30:36.584914   13872 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0925 18:30:36.584941   13872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0925 18:30:36.648934   13872 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0925 18:30:36.648963   13872 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0925 18:30:36.680166   13872 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0925 18:30:36.680195   13872 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0925 18:30:36.687982   13872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0925 18:30:36.688010   13872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0925 18:30:36.802774   13872 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 18:30:36.802799   13872 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0925 18:30:37.049858   13872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 18:30:37.213996   13872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0925 18:30:37.214018   13872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0925 18:30:37.224209   13872 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0925 18:30:37.224234   13872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0925 18:30:37.238110   13872 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0925 18:30:37.238139   13872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0925 18:30:37.285798   13872 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0925 18:30:37.285821   13872 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0925 18:30:37.598240   13872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 18:30:37.636684   13872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0925 18:30:37.636715   13872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0925 18:30:37.705780   13872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0925 18:30:37.723428   13872 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0925 18:30:37.723460   13872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0925 18:30:37.733941   13872 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0925 18:30:37.733962   13872 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0925 18:30:37.894487   13872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0925 18:30:37.894510   13872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0925 18:30:37.998605   13872 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0925 18:30:37.998633   13872 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0925 18:30:38.001886   13872 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0925 18:30:38.001906   13872 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0925 18:30:38.189218   13872 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0925 18:30:38.189241   13872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0925 18:30:38.189485   13872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0925 18:30:38.189506   13872 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0925 18:30:38.266401   13872 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0925 18:30:38.266431   13872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0925 18:30:38.423708   13872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0925 18:30:38.423737   13872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0925 18:30:38.447554   13872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0925 18:30:38.504458   13872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0925 18:30:38.686971   13872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0925 18:30:38.686999   13872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0925 18:30:38.930772   13872 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.294344754s)
	I0925 18:30:38.930803   13872 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.294411772s)
	I0925 18:30:38.930802   13872 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0925 18:30:38.931766   13872 node_ready.go:35] waiting up to 6m0s for node "addons-608075" to be "Ready" ...
	I0925 18:30:38.936091   13872 node_ready.go:49] node "addons-608075" has status "Ready":"True"
	I0925 18:30:38.936113   13872 node_ready.go:38] duration metric: took 4.324917ms for node "addons-608075" to be "Ready" ...
	I0925 18:30:38.936122   13872 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 18:30:38.950222   13872 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-k9mm2" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:38.966266   13872 pod_ready.go:93] pod "coredns-7c65d6cfc9-k9mm2" in "kube-system" namespace has status "Ready":"True"
	I0925 18:30:38.966296   13872 pod_ready.go:82] duration metric: took 16.037805ms for pod "coredns-7c65d6cfc9-k9mm2" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:38.966308   13872 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vdh9n" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:39.088661   13872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0925 18:30:39.088690   13872 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0925 18:30:39.340391   13872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0925 18:30:39.434671   13872 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-608075" context rescaled to 1 replicas
	I0925 18:30:41.073863   13872 pod_ready.go:103] pod "coredns-7c65d6cfc9-vdh9n" in "kube-system" namespace has status "Ready":"False"
	I0925 18:30:41.787025   13872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.056161715s)
	I0925 18:30:41.787097   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:41.787110   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:41.787392   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:41.787417   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:41.787426   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:41.787444   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:41.787458   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:41.787693   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:41.787709   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:42.172080   13872 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0925 18:30:42.172126   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:42.175444   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:42.175809   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:42.175837   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:42.176017   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:42.176194   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:42.176411   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:42.176556   13872 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa Username:docker}
	I0925 18:30:43.240681   13872 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0925 18:30:43.478997   13872 pod_ready.go:103] pod "coredns-7c65d6cfc9-vdh9n" in "kube-system" namespace has status "Ready":"False"
	I0925 18:30:43.802939   13872 addons.go:234] Setting addon gcp-auth=true in "addons-608075"
	I0925 18:30:43.803010   13872 host.go:66] Checking if "addons-608075" exists ...
	I0925 18:30:43.803546   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:43.803583   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:43.821245   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43873
	I0925 18:30:43.821769   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:43.822268   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:43.822291   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:43.822637   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:43.823246   13872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:30:43.823297   13872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:30:43.839617   13872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40871
	I0925 18:30:43.840112   13872 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:30:43.840647   13872 main.go:141] libmachine: Using API Version  1
	I0925 18:30:43.840675   13872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:30:43.840970   13872 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:30:43.841194   13872 main.go:141] libmachine: (addons-608075) Calling .GetState
	I0925 18:30:43.842824   13872 main.go:141] libmachine: (addons-608075) Calling .DriverName
	I0925 18:30:43.843072   13872 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0925 18:30:43.843107   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHHostname
	I0925 18:30:43.845772   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:43.846277   13872 main.go:141] libmachine: (addons-608075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:fd:f4", ip: ""} in network mk-addons-608075: {Iface:virbr1 ExpiryTime:2024-09-25 19:29:58 +0000 UTC Type:0 Mac:52:54:00:0e:fd:f4 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-608075 Clientid:01:52:54:00:0e:fd:f4}
	I0925 18:30:43.846299   13872 main.go:141] libmachine: (addons-608075) DBG | domain addons-608075 has defined IP address 192.168.39.81 and MAC address 52:54:00:0e:fd:f4 in network mk-addons-608075
	I0925 18:30:43.846480   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHPort
	I0925 18:30:43.846686   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHKeyPath
	I0925 18:30:43.846848   13872 main.go:141] libmachine: (addons-608075) Calling .GetSSHUsername
	I0925 18:30:43.846986   13872 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/addons-608075/id_rsa Username:docker}
	I0925 18:30:45.646524   13872 pod_ready.go:103] pod "coredns-7c65d6cfc9-vdh9n" in "kube-system" namespace has status "Ready":"False"
	I0925 18:30:46.488374   13872 pod_ready.go:98] pod "coredns-7c65d6cfc9-vdh9n" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:45 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:34 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:34 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:34 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:34 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.81 HostIPs:[{IP:192.168.39.81}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-25 18:30:34 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-25 18:30:36 +0000 UTC,FinishedAt:2024-09-25 18:30:44 +0000 UTC,ContainerID:docker://97b52a0ae599171ca48b50a8274aa8de7dc2dbb0227ec37c239af467d68c3f8f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://97b52a0ae599171ca48b50a8274aa8de7dc2dbb0227ec37c239af467d68c3f8f Started:0xc001b75320 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001d34f70} {Name:kube-api-access-hm9dw MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001d34f80}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0925 18:30:46.488435   13872 pod_ready.go:82] duration metric: took 7.522118156s for pod "coredns-7c65d6cfc9-vdh9n" in "kube-system" namespace to be "Ready" ...
	E0925 18:30:46.488453   13872 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-vdh9n" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:45 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:34 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:34 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:34 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:34 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.81 HostIPs:[{IP:192.168.39.81}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-25 18:30:34 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-25 18:30:36 +0000 UTC,FinishedAt:2024-09-25 18:30:44 +0000 UTC,ContainerID:docker://97b52a0ae599171ca48b50a8274aa8de7dc2dbb0227ec37c239af467d68c3f8f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://97b52a0ae599171ca48b50a8274aa8de7dc2dbb0227ec37c239af467d68c3f8f Started:0xc001b75320 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001d34f70} {Name:kube-api-access-hm9dw MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001d34f80}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0925 18:30:46.488469   13872 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-608075" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:46.564159   13872 pod_ready.go:93] pod "etcd-addons-608075" in "kube-system" namespace has status "Ready":"True"
	I0925 18:30:46.564185   13872 pod_ready.go:82] duration metric: took 75.706519ms for pod "etcd-addons-608075" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:46.564196   13872 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-608075" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:46.621617   13872 pod_ready.go:93] pod "kube-apiserver-addons-608075" in "kube-system" namespace has status "Ready":"True"
	I0925 18:30:46.621649   13872 pod_ready.go:82] duration metric: took 57.446319ms for pod "kube-apiserver-addons-608075" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:46.621666   13872 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-608075" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:46.663036   13872 pod_ready.go:93] pod "kube-controller-manager-addons-608075" in "kube-system" namespace has status "Ready":"True"
	I0925 18:30:46.663058   13872 pod_ready.go:82] duration metric: took 41.385213ms for pod "kube-controller-manager-addons-608075" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:46.663068   13872 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2lgvx" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:46.707039   13872 pod_ready.go:93] pod "kube-proxy-2lgvx" in "kube-system" namespace has status "Ready":"True"
	I0925 18:30:46.707067   13872 pod_ready.go:82] duration metric: took 43.992047ms for pod "kube-proxy-2lgvx" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:46.707077   13872 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-608075" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:46.919087   13872 pod_ready.go:93] pod "kube-scheduler-addons-608075" in "kube-system" namespace has status "Ready":"True"
	I0925 18:30:46.919116   13872 pod_ready.go:82] duration metric: took 212.031425ms for pod "kube-scheduler-addons-608075" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:46.919129   13872 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-zlw5z" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:48.993143   13872 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-zlw5z" in "kube-system" namespace has status "Ready":"False"
	I0925 18:30:49.555922   13872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (13.794573339s)
	I0925 18:30:49.555972   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.555977   13872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (13.702190801s)
	I0925 18:30:49.555985   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.555999   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.556014   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.556162   13872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (13.666328168s)
	I0925 18:30:49.556197   13872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (13.648163819s)
	I0925 18:30:49.556230   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.556244   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.556261   13872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (13.595507223s)
	I0925 18:30:49.556283   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.556297   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.556304   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:49.556346   13872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (13.396442386s)
	I0925 18:30:49.556475   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.556506   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.556525   13872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.958257218s)
	I0925 18:30:49.556551   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.556560   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.556570   13872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (11.850753944s)
	I0925 18:30:49.556575   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.556592   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.556594   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.556606   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.556601   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.556619   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.556675   13872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (11.109090472s)
	I0925 18:30:49.556695   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.556704   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.556481   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.556799   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.556811   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.556820   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.556364   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.556862   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.556871   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.556878   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.556377   13872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (13.02317579s)
	I0925 18:30:49.556916   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.556923   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.557008   13872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (11.05251586s)
	W0925 18:30:49.557037   13872 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0925 18:30:49.557059   13872 retry.go:31] will retry after 190.445932ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0925 18:30:49.557282   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:49.557302   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:49.557340   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.557348   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.558839   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.558851   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.556398   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.558913   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.558921   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.556405   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:49.558935   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:49.558951   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.556229   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.558968   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.559001   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.556448   13872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (12.506563514s)
	I0925 18:30:49.559008   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.559017   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.559021   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.559023   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.559029   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.556364   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:49.559112   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:49.559128   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:49.559146   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.559152   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.559276   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.559296   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.559312   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.559327   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.559379   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:49.559447   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.559465   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.559481   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.559497   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.559542   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:49.559570   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.559585   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.559945   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.559962   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.559974   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.559981   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.560094   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:49.560121   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.560128   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.560137   13872 addons.go:475] Verifying addon ingress=true in "addons-608075"
	I0925 18:30:49.560297   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:49.560332   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.560338   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.560345   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.560351   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.560627   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:49.560649   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.560655   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.560837   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:49.560856   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.560862   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.561002   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:49.561030   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.561036   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.561045   13872 addons.go:475] Verifying addon registry=true in "addons-608075"
	I0925 18:30:49.561238   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:49.561266   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.561272   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.561279   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.561285   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.561796   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:49.561831   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.561838   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.561926   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:49.561943   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:49.561946   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.561954   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.561961   13872 addons.go:475] Verifying addon metrics-server=true in "addons-608075"
	I0925 18:30:49.561986   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.561992   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.562000   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.562007   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.562561   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:49.562589   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.562597   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.562895   13872 out.go:177] * Verifying registry addon...
	I0925 18:30:49.562912   13872 out.go:177] * Verifying ingress addon...
	I0925 18:30:49.562932   13872 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-608075 service yakd-dashboard -n yakd-dashboard
	
	I0925 18:30:49.565514   13872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0925 18:30:49.566019   13872 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0925 18:30:49.723143   13872 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0925 18:30:49.723165   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:49.723404   13872 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0925 18:30:49.723421   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:49.747988   13872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0925 18:30:49.804922   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.804943   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.805259   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.805280   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	W0925 18:30:49.805371   13872 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0925 18:30:49.816502   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:49.816529   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:49.816811   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:49.816826   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:49.816829   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:50.143306   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:50.143925   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:50.261781   13872 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.418679988s)
	I0925 18:30:50.262007   13872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (10.921557354s)
	I0925 18:30:50.262060   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:50.262076   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:50.262320   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:50.262359   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:50.262371   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:50.262388   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:50.262398   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:50.262609   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:50.262621   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:50.262640   13872 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-608075"
	I0925 18:30:50.263402   13872 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0925 18:30:50.264922   13872 out.go:177] * Verifying csi-hostpath-driver addon...
	I0925 18:30:50.267027   13872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0925 18:30:50.267739   13872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0925 18:30:50.269105   13872 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0925 18:30:50.269120   13872 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0925 18:30:50.320120   13872 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0925 18:30:50.320144   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:50.386106   13872 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0925 18:30:50.386144   13872 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0925 18:30:50.470926   13872 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0925 18:30:50.470952   13872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0925 18:30:50.570290   13872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0925 18:30:50.696807   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:50.697216   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:50.790194   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:51.074518   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:51.075207   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:51.272043   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:51.424622   13872 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-zlw5z" in "kube-system" namespace has status "Ready":"False"
	I0925 18:30:51.571874   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:51.572058   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:51.772333   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:52.074114   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:52.074519   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:52.111871   13872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.363830918s)
	I0925 18:30:52.111926   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:52.111944   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:52.112207   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:52.112225   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:52.112235   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:52.112244   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:52.112253   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:52.112474   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:52.112488   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:52.174932   13872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.604604526s)
	I0925 18:30:52.174992   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:52.175007   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:52.175296   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:52.175358   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:52.175378   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:52.175390   13872 main.go:141] libmachine: Making call to close driver server
	I0925 18:30:52.175400   13872 main.go:141] libmachine: (addons-608075) Calling .Close
	I0925 18:30:52.175610   13872 main.go:141] libmachine: (addons-608075) DBG | Closing plugin on server side
	I0925 18:30:52.175658   13872 main.go:141] libmachine: Successfully made call to close driver server
	I0925 18:30:52.175684   13872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 18:30:52.176755   13872 addons.go:475] Verifying addon gcp-auth=true in "addons-608075"
	I0925 18:30:52.178802   13872 out.go:177] * Verifying gcp-auth addon...
	I0925 18:30:52.181219   13872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0925 18:30:52.193145   13872 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0925 18:30:52.293656   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:52.570198   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:52.570521   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:52.771749   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:53.071815   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:53.072108   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:53.272839   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:53.425929   13872 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-zlw5z" in "kube-system" namespace has status "Ready":"False"
	I0925 18:30:53.570112   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:53.571116   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:53.773333   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:54.069959   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:54.070567   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:54.272752   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:54.570517   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:54.570843   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:54.772836   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:55.070584   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:55.071076   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:55.273218   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:55.570352   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:55.570518   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:55.772931   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:55.924645   13872 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-zlw5z" in "kube-system" namespace has status "Ready":"False"
	I0925 18:30:56.072539   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:56.072712   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:56.273056   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:56.570815   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:56.571080   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:56.772687   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:57.070858   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:57.071014   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:57.272207   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:57.810413   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:57.810476   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:57.811163   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:57.924841   13872 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-zlw5z" in "kube-system" namespace has status "Ready":"False"
	I0925 18:30:58.069506   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:58.070293   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:58.273425   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:58.570449   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:58.570976   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:58.771930   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:58.924607   13872 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-zlw5z" in "kube-system" namespace has status "Ready":"True"
	I0925 18:30:58.924628   13872 pod_ready.go:82] duration metric: took 12.005485906s for pod "nvidia-device-plugin-daemonset-zlw5z" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:58.924636   13872 pod_ready.go:39] duration metric: took 19.988501606s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 18:30:58.924655   13872 api_server.go:52] waiting for apiserver process to appear ...
	I0925 18:30:58.924712   13872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 18:30:58.947137   13872 api_server.go:72] duration metric: took 23.92810819s to wait for apiserver process to appear ...
	I0925 18:30:58.947167   13872 api_server.go:88] waiting for apiserver healthz status ...
	I0925 18:30:58.947185   13872 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
	I0925 18:30:58.953197   13872 api_server.go:279] https://192.168.39.81:8443/healthz returned 200:
	ok
	I0925 18:30:58.954208   13872 api_server.go:141] control plane version: v1.31.1
	I0925 18:30:58.954230   13872 api_server.go:131] duration metric: took 7.056526ms to wait for apiserver health ...
	I0925 18:30:58.954240   13872 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 18:30:58.962576   13872 system_pods.go:59] 17 kube-system pods found
	I0925 18:30:58.962608   13872 system_pods.go:61] "coredns-7c65d6cfc9-k9mm2" [7f170d35-8fd8-4ddf-af51-57634cb618a1] Running
	I0925 18:30:58.962619   13872 system_pods.go:61] "csi-hostpath-attacher-0" [6c9ec589-9ea0-4a60-b974-20e7f05fdd95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0925 18:30:58.962627   13872 system_pods.go:61] "csi-hostpath-resizer-0" [abb3f221-1766-4a36-a0fa-8dd26d6cb45b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0925 18:30:58.962638   13872 system_pods.go:61] "csi-hostpathplugin-mn976" [90827f61-a2a1-4443-bcb2-b46d081c4081] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0925 18:30:58.962646   13872 system_pods.go:61] "etcd-addons-608075" [9ef95cb8-c1c2-4b5c-a52f-db9b99443b8b] Running
	I0925 18:30:58.962655   13872 system_pods.go:61] "kube-apiserver-addons-608075" [546133e9-bc45-4449-a805-54bd609557b2] Running
	I0925 18:30:58.962664   13872 system_pods.go:61] "kube-controller-manager-addons-608075" [abe4fcec-0b3d-47e1-a861-de3e66e00865] Running
	I0925 18:30:58.962674   13872 system_pods.go:61] "kube-ingress-dns-minikube" [a4843a3d-3900-479e-b9a6-9e3ad341cd54] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0925 18:30:58.962682   13872 system_pods.go:61] "kube-proxy-2lgvx" [23184ed9-3074-4a10-953b-8a9266f2364f] Running
	I0925 18:30:58.962687   13872 system_pods.go:61] "kube-scheduler-addons-608075" [fa24b25c-77ac-4d75-83ae-13bcd70cfabf] Running
	I0925 18:30:58.962694   13872 system_pods.go:61] "metrics-server-84c5f94fbc-4qj5z" [cc30322c-3e25-4337-8d51-fade7906b7f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 18:30:58.962699   13872 system_pods.go:61] "nvidia-device-plugin-daemonset-zlw5z" [a7be19fd-ff5c-4be3-8829-7a262960e9b1] Running
	I0925 18:30:58.962708   13872 system_pods.go:61] "registry-66c9cd494c-jt9th" [2ec7da64-0a87-4bfe-a46b-b23794d946ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0925 18:30:58.962716   13872 system_pods.go:61] "registry-proxy-r2w4s" [3d389aa0-4f88-41fd-a0f0-8ee90b81a8a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0925 18:30:58.962728   13872 system_pods.go:61] "snapshot-controller-56fcc65765-99zxd" [e8928774-bdaf-45cb-99af-8eb95e1090ec] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0925 18:30:58.962743   13872 system_pods.go:61] "snapshot-controller-56fcc65765-nbkd6" [a08b8824-43ad-47b3-b482-99ba444ed213] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0925 18:30:58.962750   13872 system_pods.go:61] "storage-provisioner" [b8adc80e-c307-4ff4-8a4d-12226ebf083a] Running
	I0925 18:30:58.962760   13872 system_pods.go:74] duration metric: took 8.512159ms to wait for pod list to return data ...
	I0925 18:30:58.962772   13872 default_sa.go:34] waiting for default service account to be created ...
	I0925 18:30:58.965669   13872 default_sa.go:45] found service account: "default"
	I0925 18:30:58.965691   13872 default_sa.go:55] duration metric: took 2.911183ms for default service account to be created ...
	I0925 18:30:58.965700   13872 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 18:30:58.972747   13872 system_pods.go:86] 17 kube-system pods found
	I0925 18:30:58.972782   13872 system_pods.go:89] "coredns-7c65d6cfc9-k9mm2" [7f170d35-8fd8-4ddf-af51-57634cb618a1] Running
	I0925 18:30:58.972794   13872 system_pods.go:89] "csi-hostpath-attacher-0" [6c9ec589-9ea0-4a60-b974-20e7f05fdd95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0925 18:30:58.972804   13872 system_pods.go:89] "csi-hostpath-resizer-0" [abb3f221-1766-4a36-a0fa-8dd26d6cb45b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0925 18:30:58.972814   13872 system_pods.go:89] "csi-hostpathplugin-mn976" [90827f61-a2a1-4443-bcb2-b46d081c4081] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0925 18:30:58.972820   13872 system_pods.go:89] "etcd-addons-608075" [9ef95cb8-c1c2-4b5c-a52f-db9b99443b8b] Running
	I0925 18:30:58.972826   13872 system_pods.go:89] "kube-apiserver-addons-608075" [546133e9-bc45-4449-a805-54bd609557b2] Running
	I0925 18:30:58.972833   13872 system_pods.go:89] "kube-controller-manager-addons-608075" [abe4fcec-0b3d-47e1-a861-de3e66e00865] Running
	I0925 18:30:58.972844   13872 system_pods.go:89] "kube-ingress-dns-minikube" [a4843a3d-3900-479e-b9a6-9e3ad341cd54] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0925 18:30:58.972854   13872 system_pods.go:89] "kube-proxy-2lgvx" [23184ed9-3074-4a10-953b-8a9266f2364f] Running
	I0925 18:30:58.972860   13872 system_pods.go:89] "kube-scheduler-addons-608075" [fa24b25c-77ac-4d75-83ae-13bcd70cfabf] Running
	I0925 18:30:58.972869   13872 system_pods.go:89] "metrics-server-84c5f94fbc-4qj5z" [cc30322c-3e25-4337-8d51-fade7906b7f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 18:30:58.972878   13872 system_pods.go:89] "nvidia-device-plugin-daemonset-zlw5z" [a7be19fd-ff5c-4be3-8829-7a262960e9b1] Running
	I0925 18:30:58.972886   13872 system_pods.go:89] "registry-66c9cd494c-jt9th" [2ec7da64-0a87-4bfe-a46b-b23794d946ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0925 18:30:58.972894   13872 system_pods.go:89] "registry-proxy-r2w4s" [3d389aa0-4f88-41fd-a0f0-8ee90b81a8a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0925 18:30:58.972907   13872 system_pods.go:89] "snapshot-controller-56fcc65765-99zxd" [e8928774-bdaf-45cb-99af-8eb95e1090ec] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0925 18:30:58.972920   13872 system_pods.go:89] "snapshot-controller-56fcc65765-nbkd6" [a08b8824-43ad-47b3-b482-99ba444ed213] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0925 18:30:58.972927   13872 system_pods.go:89] "storage-provisioner" [b8adc80e-c307-4ff4-8a4d-12226ebf083a] Running
	I0925 18:30:58.972941   13872 system_pods.go:126] duration metric: took 7.233578ms to wait for k8s-apps to be running ...
	I0925 18:30:58.972954   13872 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 18:30:58.973007   13872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 18:30:58.990369   13872 system_svc.go:56] duration metric: took 17.409462ms WaitForService to wait for kubelet
	I0925 18:30:58.990395   13872 kubeadm.go:582] duration metric: took 23.971372253s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 18:30:58.990412   13872 node_conditions.go:102] verifying NodePressure condition ...
	I0925 18:30:58.994533   13872 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0925 18:30:58.994556   13872 node_conditions.go:123] node cpu capacity is 2
	I0925 18:30:58.994569   13872 node_conditions.go:105] duration metric: took 4.153691ms to run NodePressure ...
	I0925 18:30:58.994580   13872 start.go:241] waiting for startup goroutines ...
	I0925 18:30:59.070798   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:59.071204   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:59.272910   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:59.570369   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:59.570717   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:30:59.772582   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:00.071666   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:00.071948   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:00.273138   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:00.570643   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:00.572187   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:00.773047   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:01.072013   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:01.072175   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:01.273127   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:01.571644   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:01.571798   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:01.772568   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:02.071169   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:02.071526   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:02.273377   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:02.570979   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:02.571646   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:02.772724   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:03.070724   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:03.072265   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:03.276221   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:03.571792   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:03.572161   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:03.771675   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:04.084744   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:04.085482   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:04.275077   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:04.581917   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:04.582629   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:04.772964   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:05.070071   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:05.071693   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:05.273166   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:05.568963   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:05.571118   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:05.773203   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:06.138112   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:06.138519   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:06.273205   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:06.570888   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:06.571751   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:06.772979   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:07.072759   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:07.072947   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:07.288381   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:07.571611   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:07.571886   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:07.772756   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:08.070375   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:08.071025   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:08.272873   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:08.570548   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:08.570701   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:08.772973   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:09.070997   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:09.071110   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:09.273512   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:09.570411   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:09.570877   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:09.773521   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:10.070836   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:10.070997   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:10.273452   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:10.571000   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:10.571601   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:10.787142   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:11.076984   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:11.077505   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:11.287740   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:11.569409   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:11.572399   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:11.772418   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:12.076502   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:12.077123   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:12.291564   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:12.570654   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:12.570963   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:12.795996   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:13.071177   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:13.071304   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:13.271971   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:13.569977   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:13.571063   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:13.772292   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:14.070564   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:14.070981   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:14.273485   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:14.570468   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:14.570525   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:14.774029   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:15.070861   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:15.070976   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:15.272423   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:15.570406   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:15.570552   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:15.772036   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:16.070852   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:16.071655   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:16.277275   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:16.568725   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:16.571490   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:16.943523   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:17.069803   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:17.070446   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:17.273819   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:17.570896   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:17.571331   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:17.995776   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:18.070284   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:18.070812   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:18.273299   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:18.569114   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:18.571005   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:18.772584   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:19.070683   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:31:19.070864   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:19.280547   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:19.576139   13872 kapi.go:107] duration metric: took 30.010623558s to wait for kubernetes.io/minikube-addons=registry ...
	I0925 18:31:19.576468   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:19.772649   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:20.070001   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:20.272962   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:20.571344   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:31:20.772731   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:21.070392   13872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 18:32:00.571328   13872 kapi.go:107] duration metric: took 1m11.005307249s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0925 18:32:04.791306   13872 kapi.go:107] duration metric: took 1m14.523562427s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0925 18:32:14.201790   13872 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0925 18:32:14.201818   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:02.186306   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:02.685627   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:03.185241   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:03.685441   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:04.185871   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:04.685284   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:05.185508   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:05.685415   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:06.185445   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:06.686250   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:07.184867   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:07.685833   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:08.186669   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:08.685793   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:09.185817   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:09.685540   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:10.185650   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:10.685746   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:11.185135   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:11.685537   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:12.185722   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:12.685493   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:13.185476   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:13.685548   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:14.186176   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:14.685142   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:15.185779   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:15.685771   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:16.185885   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:16.685709   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:17.185194   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:17.686231   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:18.185104   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:18.684874   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:19.185789   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:19.685431   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:20.186003   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:20.685586   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:21.184913   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:21.689859   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:22.185086   13872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:33:22.685959   13872 kapi.go:107] duration metric: took 2m30.50473366s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0925 18:33:22.688173   13872 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-608075 cluster.
	I0925 18:33:22.689902   13872 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0925 18:33:22.691387   13872 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0925 18:33:22.692887   13872 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, nvidia-device-plugin, ingress-dns, volcano, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0925 18:33:22.694310   13872 addons.go:510] duration metric: took 2m47.675264466s for enable addons: enabled=[storage-provisioner cloud-spanner nvidia-device-plugin ingress-dns volcano inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0925 18:33:22.694366   13872 start.go:246] waiting for cluster config update ...
	I0925 18:33:22.694392   13872 start.go:255] writing updated cluster config ...
	I0925 18:33:22.694746   13872 ssh_runner.go:195] Run: rm -f paused
	I0925 18:33:22.750559   13872 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0925 18:33:22.752092   13872 out.go:177] * Done! kubectl is now configured to use "addons-608075" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 25 18:43:11 addons-608075 dockerd[1202]: time="2024-09-25T18:43:11.673286539Z" level=warning msg="cleaning up after shim disconnected" id=a9a963222bc2b07058982d1602df041df2ffcd8cdde9c378b11f14377a186e15 namespace=moby
	Sep 25 18:43:11 addons-608075 dockerd[1202]: time="2024-09-25T18:43:11.674051253Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 18:43:19 addons-608075 dockerd[1195]: time="2024-09-25T18:43:19.432075836Z" level=info msg="ignoring event" container=fe58888f88053376dd3b08f91d95e80984ba96187e494687fd0006be1de17a8f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 18:43:19 addons-608075 dockerd[1202]: time="2024-09-25T18:43:19.433177140Z" level=info msg="shim disconnected" id=fe58888f88053376dd3b08f91d95e80984ba96187e494687fd0006be1de17a8f namespace=moby
	Sep 25 18:43:19 addons-608075 dockerd[1202]: time="2024-09-25T18:43:19.433580424Z" level=warning msg="cleaning up after shim disconnected" id=fe58888f88053376dd3b08f91d95e80984ba96187e494687fd0006be1de17a8f namespace=moby
	Sep 25 18:43:19 addons-608075 dockerd[1202]: time="2024-09-25T18:43:19.434307895Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 18:43:19 addons-608075 dockerd[1195]: time="2024-09-25T18:43:19.899078644Z" level=info msg="ignoring event" container=1f2c79d43bf3b8dcff9a779374b73ed8ac97669b70f4198c70aba492dc7afe52 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 18:43:19 addons-608075 dockerd[1202]: time="2024-09-25T18:43:19.899909796Z" level=info msg="shim disconnected" id=1f2c79d43bf3b8dcff9a779374b73ed8ac97669b70f4198c70aba492dc7afe52 namespace=moby
	Sep 25 18:43:19 addons-608075 dockerd[1202]: time="2024-09-25T18:43:19.900430749Z" level=warning msg="cleaning up after shim disconnected" id=1f2c79d43bf3b8dcff9a779374b73ed8ac97669b70f4198c70aba492dc7afe52 namespace=moby
	Sep 25 18:43:19 addons-608075 dockerd[1202]: time="2024-09-25T18:43:19.900611329Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 18:43:19 addons-608075 dockerd[1195]: time="2024-09-25T18:43:19.996723343Z" level=info msg="ignoring event" container=fa48e8b28e3491eae69824f42669257b36d968cf511719756ba0bab987a493b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 18:43:19 addons-608075 dockerd[1202]: time="2024-09-25T18:43:19.999593761Z" level=info msg="shim disconnected" id=fa48e8b28e3491eae69824f42669257b36d968cf511719756ba0bab987a493b1 namespace=moby
	Sep 25 18:43:19 addons-608075 dockerd[1202]: time="2024-09-25T18:43:19.999785555Z" level=warning msg="cleaning up after shim disconnected" id=fa48e8b28e3491eae69824f42669257b36d968cf511719756ba0bab987a493b1 namespace=moby
	Sep 25 18:43:19 addons-608075 dockerd[1202]: time="2024-09-25T18:43:19.999909378Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 18:43:20 addons-608075 dockerd[1195]: time="2024-09-25T18:43:20.126687767Z" level=info msg="ignoring event" container=29f5513867a1168c7a5905f08c6c1a47a000d2bf94dec74779eb278dac010537 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 18:43:20 addons-608075 dockerd[1202]: time="2024-09-25T18:43:20.126933993Z" level=info msg="shim disconnected" id=29f5513867a1168c7a5905f08c6c1a47a000d2bf94dec74779eb278dac010537 namespace=moby
	Sep 25 18:43:20 addons-608075 dockerd[1202]: time="2024-09-25T18:43:20.127153115Z" level=warning msg="cleaning up after shim disconnected" id=29f5513867a1168c7a5905f08c6c1a47a000d2bf94dec74779eb278dac010537 namespace=moby
	Sep 25 18:43:20 addons-608075 dockerd[1202]: time="2024-09-25T18:43:20.127167081Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 18:43:20 addons-608075 cri-dockerd[1093]: time="2024-09-25T18:43:20Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-r2w4s_kube-system\": unexpected command output nsenter: cannot open /proc/3146/ns/net: No such file or directory\n with error: exit status 1"
	Sep 25 18:43:20 addons-608075 dockerd[1195]: time="2024-09-25T18:43:20.299878374Z" level=info msg="ignoring event" container=35402ba3db5205c69ece424bf40332eabe4bef7ea4760025c7556bff4a66f1ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 18:43:20 addons-608075 dockerd[1202]: time="2024-09-25T18:43:20.312544247Z" level=info msg="shim disconnected" id=35402ba3db5205c69ece424bf40332eabe4bef7ea4760025c7556bff4a66f1ac namespace=moby
	Sep 25 18:43:20 addons-608075 dockerd[1202]: time="2024-09-25T18:43:20.312867130Z" level=warning msg="cleaning up after shim disconnected" id=35402ba3db5205c69ece424bf40332eabe4bef7ea4760025c7556bff4a66f1ac namespace=moby
	Sep 25 18:43:20 addons-608075 dockerd[1202]: time="2024-09-25T18:43:20.312903637Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 18:43:20 addons-608075 cri-dockerd[1093]: time="2024-09-25T18:43:20Z" level=error msg="error getting RW layer size for container ID '1f2c79d43bf3b8dcff9a779374b73ed8ac97669b70f4198c70aba492dc7afe52': Error response from daemon: No such container: 1f2c79d43bf3b8dcff9a779374b73ed8ac97669b70f4198c70aba492dc7afe52"
	Sep 25 18:43:20 addons-608075 cri-dockerd[1093]: time="2024-09-25T18:43:20Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1f2c79d43bf3b8dcff9a779374b73ed8ac97669b70f4198c70aba492dc7afe52'"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	56f76d386c67b       a416a98b71e22                                                                                                                32 seconds ago      Exited              helper-pod                0                   3913078e925de       helper-pod-delete-pvc-268deaac-a2f5-491e-8565-4ab0e1112f3d
	4e6cc9cf92c59       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              40 seconds ago      Exited              helper-pod                0                   ce8b4ebe02fe2       helper-pod-create-pvc-268deaac-a2f5-491e-8565-4ab0e1112f3d
	137d3aeb75c99       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  48 seconds ago      Running             hello-world-app           0                   88c1f9e255603       hello-world-app-55bf9c44b4-4dnbt
	86934eab7e72e       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                58 seconds ago      Running             nginx                     0                   af0f40d233d42       nginx
	2d6958de95b91       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                  0                   03544b5cee9f4       gcp-auth-89d5ffd79-zn6tn
	8b4486f93ae54       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   b0ef4fdc4bb91       ingress-nginx-admission-patch-9qcq2
	86fc008272cc9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   703b306b0ac87       ingress-nginx-admission-create-7lvtb
	c3434c9f19499       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner    0                   0e9c87f0f56e0       local-path-provisioner-86d989889c-lvskx
	fa48e8b28e349       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy            0                   35402ba3db520       registry-proxy-r2w4s
	b8f8e5ae00000       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   f70cd7668a313       storage-provisioner
	3e375d3450018       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   393a78f825373       coredns-7c65d6cfc9-k9mm2
	19847d7bf61bf       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   74da90012a606       kube-proxy-2lgvx
	bde32e408ce11       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   44040674c7883       kube-apiserver-addons-608075
	19d5df7def7a7       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   c3668243e3d2d       kube-scheduler-addons-608075
	82e7faa1a5f77       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   9e6fe2988804e       etcd-addons-608075
	d50d79b640065       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   6eb4804ef6d0d       kube-controller-manager-addons-608075
	
	
	==> coredns [3e375d345001] <==
	[INFO] Reloading complete
	[INFO] 10.244.0.6:59806 - 55376 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000355025s
	[INFO] 10.244.0.6:59806 - 6492 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000159417s
	[INFO] 10.244.0.6:44912 - 38429 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000125104s
	[INFO] 10.244.0.6:44912 - 17688 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000057415s
	[INFO] 10.244.0.6:44368 - 46820 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000143725s
	[INFO] 10.244.0.6:44368 - 32736 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081138s
	[INFO] 10.244.0.6:43128 - 2012 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00013686s
	[INFO] 10.244.0.6:43128 - 14303 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000109524s
	[INFO] 10.244.0.6:45654 - 56131 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000138259s
	[INFO] 10.244.0.6:45654 - 1351 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000051577s
	[INFO] 10.244.0.6:58787 - 63373 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079692s
	[INFO] 10.244.0.6:58787 - 41359 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000053584s
	[INFO] 10.244.0.6:57725 - 3190 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000064s
	[INFO] 10.244.0.6:57725 - 10856 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000029243s
	[INFO] 10.244.0.6:48544 - 53952 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00005884s
	[INFO] 10.244.0.6:48544 - 53190 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000083332s
	[INFO] 10.244.0.25:57878 - 60212 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001325479s
	[INFO] 10.244.0.25:39944 - 4107 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000836778s
	[INFO] 10.244.0.25:48200 - 23688 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000176897s
	[INFO] 10.244.0.25:47557 - 52263 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000085672s
	[INFO] 10.244.0.25:47170 - 36409 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090749s
	[INFO] 10.244.0.25:60551 - 46904 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000123999s
	[INFO] 10.244.0.25:41814 - 16272 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003997888s
	[INFO] 10.244.0.25:42237 - 4179 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.004542823s
	
	
	==> describe nodes <==
	Name:               addons-608075
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-608075
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cb9e6220ecbd737c1d09ad9630c6f144f437664a
	                    minikube.k8s.io/name=addons-608075
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_25T18_30_30_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-608075
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 25 Sep 2024 18:30:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-608075
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 25 Sep 2024 18:43:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 25 Sep 2024 18:43:06 +0000   Wed, 25 Sep 2024 18:30:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 25 Sep 2024 18:43:06 +0000   Wed, 25 Sep 2024 18:30:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 25 Sep 2024 18:43:06 +0000   Wed, 25 Sep 2024 18:30:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 25 Sep 2024 18:43:06 +0000   Wed, 25 Sep 2024 18:30:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.81
	  Hostname:    addons-608075
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 392d337b0cbc4b02b026f2de9a6e8aa4
	  System UUID:                392d337b-0cbc-4b02-b026-f2de9a6e8aa4
	  Boot ID:                    a1b37946-fafe-45a8-987a-216186814f35
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  default                     hello-world-app-55bf9c44b4-4dnbt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  gcp-auth                    gcp-auth-89d5ffd79-zn6tn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-k9mm2                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-608075                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-608075               250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-608075      200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-2lgvx                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-608075               100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-lvskx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-608075 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-608075 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-608075 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-608075 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-608075 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-608075 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-608075 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-608075 event: Registered Node addons-608075 in Controller
	
	
	==> dmesg <==
	[ +19.056187] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.847500] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.835163] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.100710] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.224993] kauditd_printk_skb: 75 callbacks suppressed
	[Sep25 18:32] kauditd_printk_skb: 19 callbacks suppressed
	[ +28.020828] kauditd_printk_skb: 32 callbacks suppressed
	[ +21.274777] kauditd_printk_skb: 28 callbacks suppressed
	[Sep25 18:33] kauditd_printk_skb: 40 callbacks suppressed
	[ +21.456208] kauditd_printk_skb: 47 callbacks suppressed
	[  +6.744970] kauditd_printk_skb: 2 callbacks suppressed
	[Sep25 18:34] kauditd_printk_skb: 20 callbacks suppressed
	[ +19.596519] kauditd_printk_skb: 2 callbacks suppressed
	[Sep25 18:37] kauditd_printk_skb: 28 callbacks suppressed
	[Sep25 18:42] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.348926] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.153434] kauditd_printk_skb: 17 callbacks suppressed
	[ +11.657443] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.463984] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.049493] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.812038] kauditd_printk_skb: 16 callbacks suppressed
	[  +7.646154] kauditd_printk_skb: 22 callbacks suppressed
	[Sep25 18:43] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.802705] kauditd_printk_skb: 33 callbacks suppressed
	[  +7.928435] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [82e7faa1a5f7] <==
	{"level":"warn","ts":"2024-09-25T18:31:58.869486Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-25T18:31:58.544819Z","time spent":"324.607681ms","remote":"127.0.0.1:42962","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-25T18:31:58.870043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"401.192185ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-25T18:31:58.870396Z","caller":"traceutil/trace.go:171","msg":"trace[1441561532] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1239; }","duration":"401.542633ms","start":"2024-09-25T18:31:58.468842Z","end":"2024-09-25T18:31:58.870384Z","steps":["trace[1441561532] 'range keys from in-memory index tree'  (duration: 401.183854ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-25T18:31:58.870888Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.070555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-25T18:31:58.871343Z","caller":"traceutil/trace.go:171","msg":"trace[491301986] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1239; }","duration":"124.372631ms","start":"2024-09-25T18:31:58.746804Z","end":"2024-09-25T18:31:58.871177Z","steps":["trace[491301986] 'range keys from in-memory index tree'  (duration: 123.982936ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-25T18:31:58.869982Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.499868ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-25T18:31:58.871907Z","caller":"traceutil/trace.go:171","msg":"trace[1598768220] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1239; }","duration":"211.43014ms","start":"2024-09-25T18:31:58.660465Z","end":"2024-09-25T18:31:58.871895Z","steps":["trace[1598768220] 'range keys from in-memory index tree'  (duration: 209.044882ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-25T18:31:58.871571Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"377.74596ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-09-25T18:31:58.874723Z","caller":"traceutil/trace.go:171","msg":"trace[441027704] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1239; }","duration":"381.006192ms","start":"2024-09-25T18:31:58.493702Z","end":"2024-09-25T18:31:58.874708Z","steps":["trace[441027704] 'range keys from in-memory index tree'  (duration: 377.596188ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-25T18:31:58.875572Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-25T18:31:58.493659Z","time spent":"381.341853ms","remote":"127.0.0.1:43036","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":522,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
	{"level":"info","ts":"2024-09-25T18:33:45.135177Z","caller":"traceutil/trace.go:171","msg":"trace[1414128994] linearizableReadLoop","detail":"{readStateIndex:1627; appliedIndex:1626; }","duration":"117.490218ms","start":"2024-09-25T18:33:45.017653Z","end":"2024-09-25T18:33:45.135144Z","steps":["trace[1414128994] 'read index received'  (duration: 117.34389ms)","trace[1414128994] 'applied index is now lower than readState.Index'  (duration: 145.885µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-25T18:33:45.135452Z","caller":"traceutil/trace.go:171","msg":"trace[1588537843] transaction","detail":"{read_only:false; response_revision:1566; number_of_response:1; }","duration":"260.975181ms","start":"2024-09-25T18:33:44.874457Z","end":"2024-09-25T18:33:45.135433Z","steps":["trace[1588537843] 'process raft request'  (duration: 260.577567ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-25T18:33:45.135522Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.792665ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/registry-proxy-r2w4s.17f8903b52b23508\" ","response":"range_response_count:1 size:811"}
	{"level":"info","ts":"2024-09-25T18:33:45.135560Z","caller":"traceutil/trace.go:171","msg":"trace[596599833] range","detail":"{range_begin:/registry/events/kube-system/registry-proxy-r2w4s.17f8903b52b23508; range_end:; response_count:1; response_revision:1566; }","duration":"117.893153ms","start":"2024-09-25T18:33:45.017649Z","end":"2024-09-25T18:33:45.135543Z","steps":["trace[596599833] 'agreement among raft nodes before linearized reading'  (duration: 117.69845ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-25T18:33:46.582076Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.050538ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-25T18:33:46.582149Z","caller":"traceutil/trace.go:171","msg":"trace[410940688] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1570; }","duration":"113.149768ms","start":"2024-09-25T18:33:46.468988Z","end":"2024-09-25T18:33:46.582138Z","steps":["trace[410940688] 'range keys from in-memory index tree'  (duration: 113.033783ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-25T18:40:25.564628Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1880}
	{"level":"info","ts":"2024-09-25T18:40:25.675559Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1880,"took":"110.280426ms","hash":2049460604,"current-db-size-bytes":8896512,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":5033984,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-25T18:40:25.675635Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2049460604,"revision":1880,"compact-revision":-1}
	{"level":"info","ts":"2024-09-25T18:42:12.721619Z","caller":"traceutil/trace.go:171","msg":"trace[73776205] linearizableReadLoop","detail":"{readStateIndex:2652; appliedIndex:2651; }","duration":"253.553212ms","start":"2024-09-25T18:42:12.468029Z","end":"2024-09-25T18:42:12.721583Z","steps":["trace[73776205] 'read index received'  (duration: 253.333312ms)","trace[73776205] 'applied index is now lower than readState.Index'  (duration: 219.455µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-25T18:42:12.721803Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.72245ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-25T18:42:12.721826Z","caller":"traceutil/trace.go:171","msg":"trace[1354986735] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2479; }","duration":"253.79592ms","start":"2024-09-25T18:42:12.468023Z","end":"2024-09-25T18:42:12.721818Z","steps":["trace[1354986735] 'agreement among raft nodes before linearized reading'  (duration: 253.707998ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-25T18:42:12.721893Z","caller":"traceutil/trace.go:171","msg":"trace[1737790175] transaction","detail":"{read_only:false; response_revision:2479; number_of_response:1; }","duration":"286.162829ms","start":"2024-09-25T18:42:12.435711Z","end":"2024-09-25T18:42:12.721874Z","steps":["trace[1737790175] 'process raft request'  (duration: 285.76298ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-25T18:42:12.999895Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.978829ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-25T18:42:13.000117Z","caller":"traceutil/trace.go:171","msg":"trace[336105085] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2480; }","duration":"137.162164ms","start":"2024-09-25T18:42:12.862868Z","end":"2024-09-25T18:42:13.000031Z","steps":["trace[336105085] 'range keys from in-memory index tree'  (duration: 136.93277ms)"],"step_count":1}
	
	
	==> gcp-auth [2d6958de95b9] <==
	2024/09/25 18:34:04 Ready to write response ...
	2024/09/25 18:34:04 Ready to marshal response ...
	2024/09/25 18:34:04 Ready to write response ...
	2024/09/25 18:42:07 Ready to marshal response ...
	2024/09/25 18:42:07 Ready to write response ...
	2024/09/25 18:42:07 Ready to marshal response ...
	2024/09/25 18:42:07 Ready to write response ...
	2024/09/25 18:42:07 Ready to marshal response ...
	2024/09/25 18:42:07 Ready to write response ...
	2024/09/25 18:42:19 Ready to marshal response ...
	2024/09/25 18:42:19 Ready to write response ...
	2024/09/25 18:42:19 Ready to marshal response ...
	2024/09/25 18:42:19 Ready to write response ...
	2024/09/25 18:42:30 Ready to marshal response ...
	2024/09/25 18:42:30 Ready to write response ...
	2024/09/25 18:42:32 Ready to marshal response ...
	2024/09/25 18:42:32 Ready to write response ...
	2024/09/25 18:42:38 Ready to marshal response ...
	2024/09/25 18:42:38 Ready to write response ...
	2024/09/25 18:42:38 Ready to marshal response ...
	2024/09/25 18:42:38 Ready to write response ...
	2024/09/25 18:42:48 Ready to marshal response ...
	2024/09/25 18:42:48 Ready to write response ...
	2024/09/25 18:42:54 Ready to marshal response ...
	2024/09/25 18:42:54 Ready to write response ...
	
	
	==> kernel <==
	 18:43:21 up 13 min,  0 users,  load average: 0.98, 0.69, 0.60
	Linux addons-608075 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bde32e408ce1] <==
	W0925 18:33:55.959264       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0925 18:33:56.312386       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0925 18:33:56.522813       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0925 18:33:56.769763       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0925 18:42:07.774830       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.6.247"}
	I0925 18:42:19.007484       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0925 18:42:19.252702       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.62.110"}
	I0925 18:42:19.616075       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0925 18:42:20.714365       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0925 18:42:28.651037       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0925 18:42:30.832736       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.255.116"}
	I0925 18:42:40.070633       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0925 18:43:11.196274       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0925 18:43:11.196320       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0925 18:43:11.232831       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0925 18:43:11.234245       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0925 18:43:11.235414       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0925 18:43:11.235889       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0925 18:43:11.246905       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0925 18:43:11.247371       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0925 18:43:11.298136       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0925 18:43:11.298187       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0925 18:43:12.236415       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0925 18:43:12.299283       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0925 18:43:12.389111       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [d50d79b64006] <==
	E0925 18:43:12.301849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0925 18:43:12.391292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:43:12.814170       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:43:12.814348       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:43:13.435914       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:43:13.436304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:43:13.622652       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:43:13.622865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:43:13.883339       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:43:13.883502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:43:14.838363       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:43:14.838632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:43:15.366957       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:43:15.367026       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:43:15.791616       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:43:15.791847       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:43:16.506140       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:43:16.506261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:43:18.666271       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:43:18.666310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:43:19.483264       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:43:19.483315       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0925 18:43:19.817181       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="12.401µs"
	W0925 18:43:19.887932       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:43:19.888050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [19847d7bf61b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0925 18:30:36.384919       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0925 18:30:36.399612       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.81"]
	E0925 18:30:36.399691       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0925 18:30:36.481498       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0925 18:30:36.481547       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0925 18:30:36.481572       1 server_linux.go:169] "Using iptables Proxier"
	I0925 18:30:36.485846       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0925 18:30:36.486107       1 server.go:483] "Version info" version="v1.31.1"
	I0925 18:30:36.486119       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 18:30:36.487581       1 config.go:199] "Starting service config controller"
	I0925 18:30:36.487609       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0925 18:30:36.487638       1 config.go:328] "Starting node config controller"
	I0925 18:30:36.487642       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0925 18:30:36.487988       1 config.go:105] "Starting endpoint slice config controller"
	I0925 18:30:36.488013       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0925 18:30:36.642568       1 shared_informer.go:320] Caches are synced for service config
	I0925 18:30:36.642824       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0925 18:30:36.689151       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [19d5df7def7a] <==
	W0925 18:30:28.058126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0925 18:30:28.058188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0925 18:30:28.066116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 18:30:28.066232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0925 18:30:28.169360       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 18:30:28.169409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0925 18:30:28.173469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0925 18:30:28.173650       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0925 18:30:28.191484       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0925 18:30:28.191686       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0925 18:30:28.283472       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 18:30:28.283524       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0925 18:30:28.321071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0925 18:30:28.321172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0925 18:30:28.371238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0925 18:30:28.371362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0925 18:30:28.386949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 18:30:28.387004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0925 18:30:28.415469       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0925 18:30:28.415650       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0925 18:30:28.449175       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0925 18:30:28.449458       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0925 18:30:28.466269       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 18:30:28.466495       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0925 18:30:30.755406       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 25 18:43:11 addons-608075 kubelet[1968]: I0925 18:43:11.963785    1968 scope.go:117] "RemoveContainer" containerID="e1b6e283f83c16ad3b6c677c7b912f9d5173aee91b60dcd919bd5c611546c616"
	Sep 25 18:43:11 addons-608075 kubelet[1968]: E0925 18:43:11.965588    1968 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: e1b6e283f83c16ad3b6c677c7b912f9d5173aee91b60dcd919bd5c611546c616" containerID="e1b6e283f83c16ad3b6c677c7b912f9d5173aee91b60dcd919bd5c611546c616"
	Sep 25 18:43:11 addons-608075 kubelet[1968]: I0925 18:43:11.965645    1968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e1b6e283f83c16ad3b6c677c7b912f9d5173aee91b60dcd919bd5c611546c616"} err="failed to get container status \"e1b6e283f83c16ad3b6c677c7b912f9d5173aee91b60dcd919bd5c611546c616\": rpc error: code = Unknown desc = Error response from daemon: No such container: e1b6e283f83c16ad3b6c677c7b912f9d5173aee91b60dcd919bd5c611546c616"
	Sep 25 18:43:11 addons-608075 kubelet[1968]: I0925 18:43:11.965671    1968 scope.go:117] "RemoveContainer" containerID="f17190b160abe779dd9befb4d71e0f42a75cd513940d810e4f99069c12280662"
	Sep 25 18:43:11 addons-608075 kubelet[1968]: I0925 18:43:11.972013    1968 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rfk52\" (UniqueName: \"kubernetes.io/projected/e8928774-bdaf-45cb-99af-8eb95e1090ec-kube-api-access-rfk52\") on node \"addons-608075\" DevicePath \"\""
	Sep 25 18:43:11 addons-608075 kubelet[1968]: I0925 18:43:11.972164    1968 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cwrcg\" (UniqueName: \"kubernetes.io/projected/a08b8824-43ad-47b3-b482-99ba444ed213-kube-api-access-cwrcg\") on node \"addons-608075\" DevicePath \"\""
	Sep 25 18:43:11 addons-608075 kubelet[1968]: I0925 18:43:11.989755    1968 scope.go:117] "RemoveContainer" containerID="f17190b160abe779dd9befb4d71e0f42a75cd513940d810e4f99069c12280662"
	Sep 25 18:43:11 addons-608075 kubelet[1968]: E0925 18:43:11.990745    1968 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: f17190b160abe779dd9befb4d71e0f42a75cd513940d810e4f99069c12280662" containerID="f17190b160abe779dd9befb4d71e0f42a75cd513940d810e4f99069c12280662"
	Sep 25 18:43:11 addons-608075 kubelet[1968]: I0925 18:43:11.990931    1968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"f17190b160abe779dd9befb4d71e0f42a75cd513940d810e4f99069c12280662"} err="failed to get container status \"f17190b160abe779dd9befb4d71e0f42a75cd513940d810e4f99069c12280662\": rpc error: code = Unknown desc = Error response from daemon: No such container: f17190b160abe779dd9befb4d71e0f42a75cd513940d810e4f99069c12280662"
	Sep 25 18:43:12 addons-608075 kubelet[1968]: I0925 18:43:12.028518    1968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a08b8824-43ad-47b3-b482-99ba444ed213" path="/var/lib/kubelet/pods/a08b8824-43ad-47b3-b482-99ba444ed213/volumes"
	Sep 25 18:43:12 addons-608075 kubelet[1968]: I0925 18:43:12.029130    1968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8928774-bdaf-45cb-99af-8eb95e1090ec" path="/var/lib/kubelet/pods/e8928774-bdaf-45cb-99af-8eb95e1090ec/volumes"
	Sep 25 18:43:15 addons-608075 kubelet[1968]: E0925 18:43:15.018878    1968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="352c2bb5-5c77-4b13-b8f7-3aa88f5748aa"
	Sep 25 18:43:19 addons-608075 kubelet[1968]: I0925 18:43:19.629287    1968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/352c2bb5-5c77-4b13-b8f7-3aa88f5748aa-gcp-creds\") pod \"352c2bb5-5c77-4b13-b8f7-3aa88f5748aa\" (UID: \"352c2bb5-5c77-4b13-b8f7-3aa88f5748aa\") "
	Sep 25 18:43:19 addons-608075 kubelet[1968]: I0925 18:43:19.629354    1968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwrnb\" (UniqueName: \"kubernetes.io/projected/352c2bb5-5c77-4b13-b8f7-3aa88f5748aa-kube-api-access-bwrnb\") pod \"352c2bb5-5c77-4b13-b8f7-3aa88f5748aa\" (UID: \"352c2bb5-5c77-4b13-b8f7-3aa88f5748aa\") "
	Sep 25 18:43:19 addons-608075 kubelet[1968]: I0925 18:43:19.629848    1968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/352c2bb5-5c77-4b13-b8f7-3aa88f5748aa-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "352c2bb5-5c77-4b13-b8f7-3aa88f5748aa" (UID: "352c2bb5-5c77-4b13-b8f7-3aa88f5748aa"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 25 18:43:19 addons-608075 kubelet[1968]: I0925 18:43:19.631853    1968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/352c2bb5-5c77-4b13-b8f7-3aa88f5748aa-kube-api-access-bwrnb" (OuterVolumeSpecName: "kube-api-access-bwrnb") pod "352c2bb5-5c77-4b13-b8f7-3aa88f5748aa" (UID: "352c2bb5-5c77-4b13-b8f7-3aa88f5748aa"). InnerVolumeSpecName "kube-api-access-bwrnb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 25 18:43:19 addons-608075 kubelet[1968]: I0925 18:43:19.730463    1968 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/352c2bb5-5c77-4b13-b8f7-3aa88f5748aa-gcp-creds\") on node \"addons-608075\" DevicePath \"\""
	Sep 25 18:43:19 addons-608075 kubelet[1968]: I0925 18:43:19.730497    1968 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bwrnb\" (UniqueName: \"kubernetes.io/projected/352c2bb5-5c77-4b13-b8f7-3aa88f5748aa-kube-api-access-bwrnb\") on node \"addons-608075\" DevicePath \"\""
	Sep 25 18:43:20 addons-608075 kubelet[1968]: I0925 18:43:20.265949    1968 scope.go:117] "RemoveContainer" containerID="1f2c79d43bf3b8dcff9a779374b73ed8ac97669b70f4198c70aba492dc7afe52"
	Sep 25 18:43:20 addons-608075 kubelet[1968]: I0925 18:43:20.341942    1968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8httm\" (UniqueName: \"kubernetes.io/projected/2ec7da64-0a87-4bfe-a46b-b23794d946ae-kube-api-access-8httm\") pod \"2ec7da64-0a87-4bfe-a46b-b23794d946ae\" (UID: \"2ec7da64-0a87-4bfe-a46b-b23794d946ae\") "
	Sep 25 18:43:20 addons-608075 kubelet[1968]: I0925 18:43:20.348891    1968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ec7da64-0a87-4bfe-a46b-b23794d946ae-kube-api-access-8httm" (OuterVolumeSpecName: "kube-api-access-8httm") pod "2ec7da64-0a87-4bfe-a46b-b23794d946ae" (UID: "2ec7da64-0a87-4bfe-a46b-b23794d946ae"). InnerVolumeSpecName "kube-api-access-8httm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 25 18:43:20 addons-608075 kubelet[1968]: I0925 18:43:20.448502    1968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v2gt\" (UniqueName: \"kubernetes.io/projected/3d389aa0-4f88-41fd-a0f0-8ee90b81a8a3-kube-api-access-8v2gt\") pod \"3d389aa0-4f88-41fd-a0f0-8ee90b81a8a3\" (UID: \"3d389aa0-4f88-41fd-a0f0-8ee90b81a8a3\") "
	Sep 25 18:43:20 addons-608075 kubelet[1968]: I0925 18:43:20.448601    1968 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8httm\" (UniqueName: \"kubernetes.io/projected/2ec7da64-0a87-4bfe-a46b-b23794d946ae-kube-api-access-8httm\") on node \"addons-608075\" DevicePath \"\""
	Sep 25 18:43:20 addons-608075 kubelet[1968]: I0925 18:43:20.451179    1968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d389aa0-4f88-41fd-a0f0-8ee90b81a8a3-kube-api-access-8v2gt" (OuterVolumeSpecName: "kube-api-access-8v2gt") pod "3d389aa0-4f88-41fd-a0f0-8ee90b81a8a3" (UID: "3d389aa0-4f88-41fd-a0f0-8ee90b81a8a3"). InnerVolumeSpecName "kube-api-access-8v2gt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 25 18:43:20 addons-608075 kubelet[1968]: I0925 18:43:20.549730    1968 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8v2gt\" (UniqueName: \"kubernetes.io/projected/3d389aa0-4f88-41fd-a0f0-8ee90b81a8a3-kube-api-access-8v2gt\") on node \"addons-608075\" DevicePath \"\""
	
	
	==> storage-provisioner [b8f8e5ae0000] <==
	I0925 18:30:45.612634       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0925 18:30:45.769414       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0925 18:30:45.769506       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0925 18:30:45.875313       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0925 18:30:45.876866       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1225b50a-1167-42bd-a8bd-ca8661943946", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-608075_8fc1d46a-f2a4-41b0-b825-e094353c3532 became leader
	I0925 18:30:45.877528       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-608075_8fc1d46a-f2a4-41b0-b825-e094353c3532!
	I0925 18:30:45.979682       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-608075_8fc1d46a-f2a4-41b0-b825-e094353c3532!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-608075 -n addons-608075
helpers_test.go:261: (dbg) Run:  kubectl --context addons-608075 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-608075 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-608075 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-608075/192.168.39.81
	Start Time:       Wed, 25 Sep 2024 18:34:04 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vrwb7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vrwb7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m17s                   default-scheduler  Successfully assigned default/busybox to addons-608075
	  Normal   Pulling    7m56s (x4 over 9m16s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m56s (x4 over 9m16s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m56s (x4 over 9m16s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m28s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m15s (x19 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (74.88s)


Test pass (308/340)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.53
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 4.96
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.6
22 TestOffline 94.31
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 220.46
29 TestAddons/serial/Volcano 41.63
31 TestAddons/serial/GCPAuth/Namespaces 0.14
34 TestAddons/parallel/Ingress 22.27
35 TestAddons/parallel/InspektorGadget 11.21
36 TestAddons/parallel/MetricsServer 6.99
38 TestAddons/parallel/CSI 46.95
39 TestAddons/parallel/Headlamp 17.68
40 TestAddons/parallel/CloudSpanner 6.68
41 TestAddons/parallel/LocalPath 10.04
42 TestAddons/parallel/NvidiaDevicePlugin 6.43
43 TestAddons/parallel/Yakd 11.71
44 TestAddons/StoppedEnableDisable 13.57
45 TestCertOptions 96.49
46 TestCertExpiration 316.71
47 TestDockerFlags 95.94
48 TestForceSystemdFlag 72.9
49 TestForceSystemdEnv 66.24
51 TestKVMDriverInstallOrUpdate 4.56
55 TestErrorSpam/setup 51.66
56 TestErrorSpam/start 0.37
57 TestErrorSpam/status 0.75
58 TestErrorSpam/pause 1.25
59 TestErrorSpam/unpause 1.48
60 TestErrorSpam/stop 6.01
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 63.87
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 38.27
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.1
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.42
72 TestFunctional/serial/CacheCmd/cache/add_local 0.99
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
74 TestFunctional/serial/CacheCmd/cache/list 0.04
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.18
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 40.65
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.03
83 TestFunctional/serial/LogsFileCmd 1.06
84 TestFunctional/serial/InvalidService 4.88
86 TestFunctional/parallel/ConfigCmd 0.34
87 TestFunctional/parallel/DashboardCmd 38.37
88 TestFunctional/parallel/DryRun 0.31
89 TestFunctional/parallel/InternationalLanguage 0.15
90 TestFunctional/parallel/StatusCmd 1
94 TestFunctional/parallel/ServiceCmdConnect 11.6
95 TestFunctional/parallel/AddonsCmd 0.13
96 TestFunctional/parallel/PersistentVolumeClaim 49.13
98 TestFunctional/parallel/SSHCmd 0.41
99 TestFunctional/parallel/CpCmd 1.33
100 TestFunctional/parallel/MySQL 32.94
101 TestFunctional/parallel/FileSync 0.27
102 TestFunctional/parallel/CertSync 1.33
106 TestFunctional/parallel/NodeLabels 0.31
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.27
110 TestFunctional/parallel/License 0.19
120 TestFunctional/parallel/ServiceCmd/DeployApp 11.24
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
122 TestFunctional/parallel/ProfileCmd/profile_list 0.37
123 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
124 TestFunctional/parallel/MountCmd/any-port 7.38
125 TestFunctional/parallel/MountCmd/specific-port 1.93
126 TestFunctional/parallel/MountCmd/VerifyCleanup 1.76
127 TestFunctional/parallel/ServiceCmd/List 0.29
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
130 TestFunctional/parallel/ServiceCmd/Format 0.34
131 TestFunctional/parallel/DockerEnv/bash 1.09
132 TestFunctional/parallel/ServiceCmd/URL 0.38
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
136 TestFunctional/parallel/Version/short 0.05
137 TestFunctional/parallel/Version/components 0.6
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
142 TestFunctional/parallel/ImageCommands/ImageBuild 3.51
143 TestFunctional/parallel/ImageCommands/Setup 1.04
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.55
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.76
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.23
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.4
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.73
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
151 TestFunctional/delete_echo-server_images 0.04
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
154 TestGvisorAddon 204.82
157 TestMultiControlPlane/serial/StartCluster 223.34
158 TestMultiControlPlane/serial/DeployApp 36.57
159 TestMultiControlPlane/serial/PingHostFromPods 1.31
160 TestMultiControlPlane/serial/AddWorkerNode 61.84
161 TestMultiControlPlane/serial/NodeLabels 0.08
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
163 TestMultiControlPlane/serial/CopyFile 13.03
164 TestMultiControlPlane/serial/StopSecondaryNode 13.94
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
166 TestMultiControlPlane/serial/RestartSecondaryNode 43.15
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 227.64
169 TestMultiControlPlane/serial/DeleteSecondaryNode 7.4
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
171 TestMultiControlPlane/serial/StopCluster 39.03
172 TestMultiControlPlane/serial/RestartCluster 158.53
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
174 TestMultiControlPlane/serial/AddSecondaryNode 84.63
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.92
178 TestImageBuild/serial/Setup 50.91
179 TestImageBuild/serial/NormalBuild 1.49
180 TestImageBuild/serial/BuildWithBuildArg 0.99
181 TestImageBuild/serial/BuildWithDockerIgnore 0.81
182 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.86
186 TestJSONOutput/start/Command 93.12
187 TestJSONOutput/start/Audit 0
189 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Command 0.59
193 TestJSONOutput/pause/Audit 0
195 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Command 0.52
199 TestJSONOutput/unpause/Audit 0
201 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/stop/Command 13.31
205 TestJSONOutput/stop/Audit 0
207 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
209 TestErrorJSONOutput 0.2
214 TestMainNoArgs 0.04
215 TestMinikubeProfile 102.46
218 TestMountStart/serial/StartWithMountFirst 28.92
219 TestMountStart/serial/VerifyMountFirst 0.37
220 TestMountStart/serial/StartWithMountSecond 30.87
221 TestMountStart/serial/VerifyMountSecond 0.49
222 TestMountStart/serial/DeleteFirst 0.74
223 TestMountStart/serial/VerifyMountPostDelete 0.38
224 TestMountStart/serial/Stop 2.28
225 TestMountStart/serial/RestartStopped 25.69
226 TestMountStart/serial/VerifyMountPostStop 0.37
229 TestMultiNode/serial/FreshStart2Nodes 132.89
230 TestMultiNode/serial/DeployApp2Nodes 3.49
231 TestMultiNode/serial/PingHostFrom2Pods 0.85
232 TestMultiNode/serial/AddNode 55.4
233 TestMultiNode/serial/MultiNodeLabels 0.06
234 TestMultiNode/serial/ProfileList 0.58
235 TestMultiNode/serial/CopyFile 7.14
236 TestMultiNode/serial/StopNode 3.43
237 TestMultiNode/serial/StartAfterStop 42.37
238 TestMultiNode/serial/RestartKeepsNodes 193.13
239 TestMultiNode/serial/DeleteNode 2.3
240 TestMultiNode/serial/StopMultiNode 25.14
241 TestMultiNode/serial/RestartMultiNode 116.73
242 TestMultiNode/serial/ValidateNameConflict 52.01
247 TestPreload 195.13
249 TestScheduledStopUnix 125.77
250 TestSkaffold 132.71
253 TestRunningBinaryUpgrade 215.4
255 TestKubernetesUpgrade 234.04
263 TestPause/serial/Start 122.79
277 TestPause/serial/SecondStartNoReconfiguration 84.18
278 TestPause/serial/Pause 0.62
279 TestPause/serial/VerifyStatus 0.27
280 TestPause/serial/Unpause 0.62
281 TestPause/serial/PauseAgain 0.82
282 TestPause/serial/DeletePaused 1.14
283 TestPause/serial/VerifyDeletedResources 5.42
285 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
286 TestNoKubernetes/serial/StartWithK8s 89.15
287 TestStoppedBinaryUpgrade/Setup 0.44
288 TestStoppedBinaryUpgrade/Upgrade 158.74
289 TestNoKubernetes/serial/StartWithStopK8s 53.21
290 TestNoKubernetes/serial/Start 59.33
291 TestStoppedBinaryUpgrade/MinikubeLogs 1.12
292 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
293 TestNoKubernetes/serial/ProfileList 0.94
294 TestNoKubernetes/serial/Stop 2.28
295 TestNoKubernetes/serial/StartNoArgs 61.13
296 TestNetworkPlugins/group/auto/Start 78.97
297 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
298 TestNetworkPlugins/group/kindnet/Start 107.21
299 TestNetworkPlugins/group/calico/Start 135.4
300 TestNetworkPlugins/group/auto/KubeletFlags 0.22
301 TestNetworkPlugins/group/auto/NetCatPod 13.3
302 TestNetworkPlugins/group/auto/DNS 0.22
303 TestNetworkPlugins/group/auto/Localhost 0.14
304 TestNetworkPlugins/group/auto/HairPin 0.15
305 TestNetworkPlugins/group/custom-flannel/Start 77.61
306 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
307 TestNetworkPlugins/group/false/Start 74.94
308 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
309 TestNetworkPlugins/group/kindnet/NetCatPod 12.34
310 TestNetworkPlugins/group/kindnet/DNS 0.24
311 TestNetworkPlugins/group/kindnet/Localhost 0.21
312 TestNetworkPlugins/group/kindnet/HairPin 0.18
313 TestNetworkPlugins/group/calico/ControllerPod 6.01
314 TestNetworkPlugins/group/enable-default-cni/Start 70.72
315 TestNetworkPlugins/group/calico/KubeletFlags 0.27
316 TestNetworkPlugins/group/calico/NetCatPod 14.48
317 TestNetworkPlugins/group/calico/DNS 0.24
318 TestNetworkPlugins/group/calico/Localhost 0.2
319 TestNetworkPlugins/group/calico/HairPin 0.18
320 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
321 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.26
322 TestNetworkPlugins/group/custom-flannel/DNS 0.23
323 TestNetworkPlugins/group/custom-flannel/Localhost 0.29
324 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
325 TestNetworkPlugins/group/flannel/Start 74.84
326 TestNetworkPlugins/group/false/KubeletFlags 0.23
327 TestNetworkPlugins/group/false/NetCatPod 12.25
328 TestNetworkPlugins/group/false/DNS 0.18
329 TestNetworkPlugins/group/false/Localhost 0.19
330 TestNetworkPlugins/group/false/HairPin 0.15
331 TestNetworkPlugins/group/bridge/Start 86.72
332 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
333 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.35
334 TestNetworkPlugins/group/kubenet/Start 130.3
335 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
336 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
337 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
339 TestStartStop/group/old-k8s-version/serial/FirstStart 179.57
340 TestNetworkPlugins/group/flannel/ControllerPod 6.01
341 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
342 TestNetworkPlugins/group/flannel/NetCatPod 11.25
343 TestNetworkPlugins/group/flannel/DNS 0.24
344 TestNetworkPlugins/group/flannel/Localhost 0.17
345 TestNetworkPlugins/group/flannel/HairPin 0.16
346 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
347 TestNetworkPlugins/group/bridge/NetCatPod 11.31
349 TestStartStop/group/no-preload/serial/FirstStart 107.15
350 TestNetworkPlugins/group/bridge/DNS 0.17
351 TestNetworkPlugins/group/bridge/Localhost 0.14
352 TestNetworkPlugins/group/bridge/HairPin 0.15
354 TestStartStop/group/embed-certs/serial/FirstStart 77.3
355 TestNetworkPlugins/group/kubenet/KubeletFlags 0.23
356 TestNetworkPlugins/group/kubenet/NetCatPod 10.3
357 TestNetworkPlugins/group/kubenet/DNS 0.18
358 TestNetworkPlugins/group/kubenet/Localhost 0.21
359 TestNetworkPlugins/group/kubenet/HairPin 0.19
361 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 68.07
362 TestStartStop/group/embed-certs/serial/DeployApp 8.3
363 TestStartStop/group/no-preload/serial/DeployApp 8.38
364 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.04
365 TestStartStop/group/embed-certs/serial/Stop 13.4
366 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.76
367 TestStartStop/group/no-preload/serial/Stop 13.37
368 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
369 TestStartStop/group/embed-certs/serial/SecondStart 306.29
370 TestStartStop/group/old-k8s-version/serial/DeployApp 8.59
371 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
372 TestStartStop/group/no-preload/serial/SecondStart 315.11
373 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.07
374 TestStartStop/group/old-k8s-version/serial/Stop 13.35
375 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.37
376 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
377 TestStartStop/group/old-k8s-version/serial/SecondStart 425.96
378 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
379 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.38
380 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
381 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 336.84
382 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
384 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
385 TestStartStop/group/embed-certs/serial/Pause 2.53
387 TestStartStop/group/newest-cni/serial/FirstStart 63.83
388 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
389 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
390 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
391 TestStartStop/group/no-preload/serial/Pause 2.55
392 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.01
393 TestStartStop/group/newest-cni/serial/DeployApp 0
394 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.97
395 TestStartStop/group/newest-cni/serial/Stop 13.34
396 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
397 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
398 TestStartStop/group/newest-cni/serial/SecondStart 39.26
399 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
400 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.61
401 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
402 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
403 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
404 TestStartStop/group/newest-cni/serial/Pause 2.78
405 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
406 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
407 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
408 TestStartStop/group/old-k8s-version/serial/Pause 2.38
TestDownloadOnly/v1.20.0/json-events (8.53s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-433203 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-433203 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (8.525418223s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.53s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0925 18:29:35.817206   13239 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0925 18:29:35.817296   13239 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19681-6065/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-433203
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-433203: exit status 85 (54.548281ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-433203 | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC |          |
	|         | -p download-only-433203        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/25 18:29:27
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 18:29:27.329009   13251 out.go:345] Setting OutFile to fd 1 ...
	I0925 18:29:27.329116   13251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:29:27.329125   13251 out.go:358] Setting ErrFile to fd 2...
	I0925 18:29:27.329129   13251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:29:27.329306   13251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19681-6065/.minikube/bin
	W0925 18:29:27.329427   13251 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19681-6065/.minikube/config/config.json: open /home/jenkins/minikube-integration/19681-6065/.minikube/config/config.json: no such file or directory
	I0925 18:29:27.330021   13251 out.go:352] Setting JSON to true
	I0925 18:29:27.330876   13251 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":710,"bootTime":1727288257,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 18:29:27.330962   13251 start.go:139] virtualization: kvm guest
	I0925 18:29:27.333271   13251 out.go:97] [download-only-433203] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0925 18:29:27.333385   13251 notify.go:220] Checking for updates...
	W0925 18:29:27.333379   13251 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19681-6065/.minikube/cache/preloaded-tarball: no such file or directory
	I0925 18:29:27.334743   13251 out.go:169] MINIKUBE_LOCATION=19681
	I0925 18:29:27.336276   13251 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 18:29:27.337582   13251 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19681-6065/kubeconfig
	I0925 18:29:27.338910   13251 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-6065/.minikube
	I0925 18:29:27.340180   13251 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0925 18:29:27.342736   13251 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0925 18:29:27.342994   13251 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 18:29:27.450009   13251 out.go:97] Using the kvm2 driver based on user configuration
	I0925 18:29:27.450034   13251 start.go:297] selected driver: kvm2
	I0925 18:29:27.450042   13251 start.go:901] validating driver "kvm2" against <nil>
	I0925 18:29:27.450501   13251 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 18:29:27.450659   13251 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19681-6065/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0925 18:29:27.465822   13251 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0925 18:29:27.465887   13251 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 18:29:27.466447   13251 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0925 18:29:27.466601   13251 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0925 18:29:27.466627   13251 cni.go:84] Creating CNI manager for ""
	I0925 18:29:27.466678   13251 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 18:29:27.466733   13251 start.go:340] cluster config:
	{Name:download-only-433203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-433203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 18:29:27.466901   13251 iso.go:125] acquiring lock: {Name:mkac644039a90c04558d628f48440edffcc827c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 18:29:27.469053   13251 out.go:97] Downloading VM boot image ...
	I0925 18:29:27.469097   13251 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19681-6065/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0925 18:29:31.023978   13251 out.go:97] Starting "download-only-433203" primary control-plane node in "download-only-433203" cluster
	I0925 18:29:31.023998   13251 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0925 18:29:31.043934   13251 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0925 18:29:31.043969   13251 cache.go:56] Caching tarball of preloaded images
	I0925 18:29:31.044144   13251 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0925 18:29:31.046150   13251 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0925 18:29:31.046168   13251 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0925 18:29:31.076196   13251 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19681-6065/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-433203 host does not exist
	  To start a cluster, run: "minikube start -p download-only-433203"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-433203
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (4.96s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-467975 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-467975 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=kvm2 : (4.961136029s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (4.96s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0925 18:29:41.087998   13239 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0925 18:29:41.088042   13239 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19681-6065/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-467975
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-467975: exit status 85 (56.875927ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-433203 | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC |                     |
	|         | -p download-only-433203        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
	| delete  | -p download-only-433203        | download-only-433203 | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
	| start   | -o=json --download-only        | download-only-467975 | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC |                     |
	|         | -p download-only-467975        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/25 18:29:36
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 18:29:36.163772   13464 out.go:345] Setting OutFile to fd 1 ...
	I0925 18:29:36.164023   13464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:29:36.164031   13464 out.go:358] Setting ErrFile to fd 2...
	I0925 18:29:36.164036   13464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:29:36.164209   13464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19681-6065/.minikube/bin
	I0925 18:29:36.164778   13464 out.go:352] Setting JSON to true
	I0925 18:29:36.165670   13464 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":719,"bootTime":1727288257,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 18:29:36.165768   13464 start.go:139] virtualization: kvm guest
	I0925 18:29:36.167876   13464 out.go:97] [download-only-467975] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0925 18:29:36.168054   13464 notify.go:220] Checking for updates...
	I0925 18:29:36.169658   13464 out.go:169] MINIKUBE_LOCATION=19681
	I0925 18:29:36.171087   13464 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 18:29:36.172313   13464 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19681-6065/kubeconfig
	I0925 18:29:36.173540   13464 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-6065/.minikube
	I0925 18:29:36.174955   13464 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-467975 host does not exist
	  To start a cluster, run: "minikube start -p download-only-467975"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-467975
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I0925 18:29:41.656198   13239 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-724330 --alsologtostderr --binary-mirror http://127.0.0.1:41573 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-724330" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-724330
--- PASS: TestBinaryMirror (0.60s)

TestOffline (94.31s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-698013 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-698013 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m33.270837566s)
helpers_test.go:175: Cleaning up "offline-docker-698013" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-698013
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-698013: (1.034645899s)
--- PASS: TestOffline (94.31s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-608075
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-608075: exit status 85 (46.777696ms)

-- stdout --
	* Profile "addons-608075" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-608075"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-608075
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-608075: exit status 85 (48.888581ms)

-- stdout --
	* Profile "addons-608075" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-608075"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (220.46s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-608075 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-608075 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns: (3m40.457158021s)
--- PASS: TestAddons/Setup (220.46s)

TestAddons/serial/Volcano (41.63s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 16.341012ms
addons_test.go:835: volcano-scheduler stabilized in 17.168568ms
addons_test.go:843: volcano-admission stabilized in 17.214692ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-kssx8" [c563e184-0cf2-4452-af59-63ea4281a190] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004074859s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-2whkf" [9e8dc7a6-7557-4f91-85a0-263179d8625b] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0038565s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-jbk6m" [a7d4c4a7-9841-4a21-96ed-391ab6142b2a] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003923403s
addons_test.go:870: (dbg) Run:  kubectl --context addons-608075 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-608075 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-608075 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [95f69f8c-9c67-4371-b05a-6d597e16631d] Pending
helpers_test.go:344: "test-job-nginx-0" [95f69f8c-9c67-4371-b05a-6d597e16631d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [95f69f8c-9c67-4371-b05a-6d597e16631d] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.00398864s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p addons-608075 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p addons-608075 addons disable volcano --alsologtostderr -v=1: (11.173358517s)
--- PASS: TestAddons/serial/Volcano (41.63s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-608075 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-608075 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/parallel/Ingress (22.27s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-608075 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-608075 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-608075 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2fa666cc-05d8-4d39-99ef-f1bdbfca0027] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2fa666cc-05d8-4d39-99ef-f1bdbfca0027] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004177962s
I0925 18:42:30.302150   13239 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-608075 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-608075 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-608075 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.39.81
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-608075 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-608075 addons disable ingress-dns --alsologtostderr -v=1: (1.582506865s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-608075 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-608075 addons disable ingress --alsologtostderr -v=1: (8.278179574s)
--- PASS: TestAddons/parallel/Ingress (22.27s)

TestAddons/parallel/InspektorGadget (11.21s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-gxdfp" [252ef222-3a85-4c5d-a798-ba73ccdd6e6d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.008298121s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-608075
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-608075: (6.198261981s)
--- PASS: TestAddons/parallel/InspektorGadget (11.21s)

TestAddons/parallel/MetricsServer (6.99s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 6.915077ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-4qj5z" [cc30322c-3e25-4337-8d51-fade7906b7f0] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006907857s
addons_test.go:413: (dbg) Run:  kubectl --context addons-608075 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-608075 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.99s)

TestAddons/parallel/CSI (46.95s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0925 18:42:24.725673   13239 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0925 18:42:24.733538   13239 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0925 18:42:24.733579   13239 kapi.go:107] duration metric: took 7.921943ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 7.933368ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-608075 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-608075 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [fc4e4465-cb00-4a48-945c-0bd3dedd2f53] Pending
helpers_test.go:344: "task-pv-pod" [fc4e4465-cb00-4a48-945c-0bd3dedd2f53] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [fc4e4465-cb00-4a48-945c-0bd3dedd2f53] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004623809s
addons_test.go:528: (dbg) Run:  kubectl --context addons-608075 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-608075 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-608075 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-608075 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-608075 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-608075 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-608075 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [270992ca-a669-4ef3-b671-7f5331ce625f] Pending
helpers_test.go:344: "task-pv-pod-restore" [270992ca-a669-4ef3-b671-7f5331ce625f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [270992ca-a669-4ef3-b671-7f5331ce625f] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004557316s
addons_test.go:570: (dbg) Run:  kubectl --context addons-608075 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-608075 delete pod task-pv-pod-restore: (1.160921082s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-608075 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-608075 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-608075 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-608075 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.809998095s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-608075 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.95s)

TestAddons/parallel/Headlamp (17.68s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-608075 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-25mkb" [99eb911a-cf13-4fa9-9cc2-87b9f0543c96] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-25mkb" [99eb911a-cf13-4fa9-9cc2-87b9f0543c96] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-25mkb" [99eb911a-cf13-4fa9-9cc2-87b9f0543c96] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.020917217s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-608075 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-608075 addons disable headlamp --alsologtostderr -v=1: (5.805572475s)
--- PASS: TestAddons/parallel/Headlamp (17.68s)

TestAddons/parallel/CloudSpanner (6.68s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-n6fx2" [6dca6a35-a81c-4962-8953-b483a571792f] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004474176s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-608075
--- PASS: TestAddons/parallel/CloudSpanner (6.68s)

TestAddons/parallel/LocalPath (10.04s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-608075 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-608075 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-608075 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c13632e5-ed43-4572-ab5b-81170f74f0f6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c13632e5-ed43-4572-ab5b-81170f74f0f6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c13632e5-ed43-4572-ab5b-81170f74f0f6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003725522s
addons_test.go:938: (dbg) Run:  kubectl --context addons-608075 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-608075 ssh "cat /opt/local-path-provisioner/pvc-268deaac-a2f5-491e-8565-4ab0e1112f3d_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-608075 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-608075 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-608075 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.04s)

TestAddons/parallel/NvidiaDevicePlugin (6.43s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zlw5z" [a7be19fd-ff5c-4be3-8829-7a262960e9b1] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005550813s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-608075
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.43s)

TestAddons/parallel/Yakd (11.71s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-7mdsm" [3f87c1d7-2567-4f35-9278-1165073998c4] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.009613225s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-608075 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-608075 addons disable yakd --alsologtostderr -v=1: (5.701292343s)
--- PASS: TestAddons/parallel/Yakd (11.71s)

TestAddons/StoppedEnableDisable (13.57s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-608075
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-608075: (13.303033222s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-608075
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-608075
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-608075
--- PASS: TestAddons/StoppedEnableDisable (13.57s)

TestCertOptions (96.49s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-170938 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-170938 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m34.891991059s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-170938 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-170938 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-170938 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-170938" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-170938
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-170938: (1.075036951s)
--- PASS: TestCertOptions (96.49s)

TestCertExpiration (316.71s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-488026 --memory=2048 --cert-expiration=3m --driver=kvm2 
E0925 19:32:21.694858   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/skaffold-816010/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:32:21.701268   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/skaffold-816010/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:32:21.712734   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/skaffold-816010/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:32:21.734220   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/skaffold-816010/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:32:21.775718   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/skaffold-816010/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:32:21.857202   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/skaffold-816010/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:32:22.019198   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/skaffold-816010/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:32:22.340913   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/skaffold-816010/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:32:22.982703   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/skaffold-816010/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:32:24.264177   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/skaffold-816010/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:32:26.826315   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/skaffold-816010/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-488026 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m17.033712954s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-488026 --memory=2048 --cert-expiration=8760h --driver=kvm2 
E0925 19:36:36.753376   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/gvisor-745680/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:36:36.759868   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/gvisor-745680/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:36:36.771384   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/gvisor-745680/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:36:36.792807   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/gvisor-745680/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:36:36.834849   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/gvisor-745680/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:36:36.916324   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/gvisor-745680/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:36:37.077878   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/gvisor-745680/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:36:37.400151   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/gvisor-745680/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:36:38.042552   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/gvisor-745680/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:36:39.323968   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/gvisor-745680/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-488026 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (58.594498028s)
helpers_test.go:175: Cleaning up "cert-expiration-488026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-488026
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-488026: (1.08454314s)
--- PASS: TestCertExpiration (316.71s)

TestDockerFlags (95.94s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-136489 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-136489 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m34.309705877s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-136489 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-136489 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-136489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-136489
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-136489: (1.119487096s)
--- PASS: TestDockerFlags (95.94s)

TestForceSystemdFlag (72.9s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-491585 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-491585 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m11.828935805s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-491585 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-491585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-491585
--- PASS: TestForceSystemdFlag (72.90s)

TestForceSystemdEnv (66.24s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-646735 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
I0925 19:31:11.101138   13239 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-646735 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m4.842479516s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-646735 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-646735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-646735
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-646735: (1.103152961s)
--- PASS: TestForceSystemdEnv (66.24s)

TestKVMDriverInstallOrUpdate (4.56s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0925 19:31:09.318325   13239 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0925 19:31:09.318515   13239 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0925 19:31:09.349609   13239 install.go:62] docker-machine-driver-kvm2: exit status 1
W0925 19:31:09.350002   13239 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0925 19:31:09.350087   13239 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1241106359/001/docker-machine-driver-kvm2
I0925 19:31:09.614367   13239 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1241106359/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x466e640 0x466e640 0x466e640 0x466e640 0x466e640 0x466e640 0x466e640] Decompressors:map[bz2:0xc0004abb10 gz:0xc0004abb18 tar:0xc0004abac0 tar.bz2:0xc0004abad0 tar.gz:0xc0004abae0 tar.xz:0xc0004abaf0 tar.zst:0xc0004abb00 tbz2:0xc0004abad0 tgz:0xc0004abae0 txz:0xc0004abaf0 tzst:0xc0004abb00 xz:0xc0004abb20 zip:0xc0004abb30 zst:0xc0004abb28] Getters:map[file:0xc0013fd200 http:0xc000a86820 https:0xc000a86870] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0925 19:31:09.614432   13239 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1241106359/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.56s)

TestErrorSpam/setup (51.66s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-733546 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-733546 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-733546 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-733546 --driver=kvm2 : (51.658978106s)
--- PASS: TestErrorSpam/setup (51.66s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-733546 --log_dir /tmp/nospam-733546 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-733546 --log_dir /tmp/nospam-733546 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-733546 --log_dir /tmp/nospam-733546 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.75s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-733546 --log_dir /tmp/nospam-733546 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-733546 --log_dir /tmp/nospam-733546 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-733546 --log_dir /tmp/nospam-733546 status
--- PASS: TestErrorSpam/status (0.75s)

TestErrorSpam/pause (1.25s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-733546 --log_dir /tmp/nospam-733546 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-733546 --log_dir /tmp/nospam-733546 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-733546 --log_dir /tmp/nospam-733546 pause
--- PASS: TestErrorSpam/pause (1.25s)

TestErrorSpam/unpause (1.48s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-733546 --log_dir /tmp/nospam-733546 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-733546 --log_dir /tmp/nospam-733546 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-733546 --log_dir /tmp/nospam-733546 unpause
--- PASS: TestErrorSpam/unpause (1.48s)

TestErrorSpam/stop (6.01s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-733546 --log_dir /tmp/nospam-733546 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-733546 --log_dir /tmp/nospam-733546 stop: (3.60559535s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-733546 --log_dir /tmp/nospam-733546 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-733546 --log_dir /tmp/nospam-733546 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-733546 --log_dir /tmp/nospam-733546 stop: (1.476734695s)
--- PASS: TestErrorSpam/stop (6.01s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19681-6065/.minikube/files/etc/test/nested/copy/13239/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (63.87s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-641225 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-641225 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m3.872541443s)
--- PASS: TestFunctional/serial/StartWithProxy (63.87s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.27s)

=== RUN   TestFunctional/serial/SoftStart
I0925 18:45:42.070345   13239 config.go:182] Loaded profile config "functional-641225": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-641225 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-641225 --alsologtostderr -v=8: (38.26842355s)
functional_test.go:663: soft start took 38.269117269s for "functional-641225" cluster.
I0925 18:46:20.339148   13239 config.go:182] Loaded profile config "functional-641225": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (38.27s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-641225 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.42s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.42s)

TestFunctional/serial/CacheCmd/cache/add_local (0.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-641225 /tmp/TestFunctionalserialCacheCmdcacheadd_local3851475387/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 cache add minikube-local-cache-test:functional-641225
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 cache delete minikube-local-cache-test:functional-641225
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-641225
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.99s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-641225 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (210.120815ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.18s)
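The cache_reload exchange above (rmi, failed `crictl inspecti` with exit 1, `cache reload`, successful `inspecti`) can be replayed by hand. A minimal sketch, assuming a `minikube` binary on PATH and a running profile; the profile name is taken from this log, and the script is a no-op where minikube is not installed:

```shell
#!/bin/sh
# Sketch of the cache_reload flow exercised above (assumptions: minikube
# on PATH, a running profile named as in this log; both guarded below).
PROFILE="functional-641225"
IMG="registry.k8s.io/pause:latest"

if command -v minikube >/dev/null 2>&1; then
  # Delete the image inside the node, confirm it is gone, then restore it
  # from minikube's local cache and verify it is present again.
  minikube -p "$PROFILE" ssh sudo docker rmi "$IMG" || true
  if ! minikube -p "$PROFILE" ssh sudo crictl inspecti "$IMG" >/dev/null 2>&1; then
    echo "image gone as expected; reloading cache"
    minikube -p "$PROFILE" cache reload
  fi
  minikube -p "$PROFILE" ssh sudo crictl inspecti "$IMG" >/dev/null 2>&1 && echo "image restored"
else
  echo "minikube not on PATH; skipping"
fi
```

This mirrors the functional_test.go sequence line for line; only the binary path differs from the test's `out/minikube-linux-amd64`.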

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 kubectl -- --context functional-641225 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-641225 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (40.65s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-641225 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-641225 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.647167497s)
functional_test.go:761: restart took 40.647278169s for "functional-641225" cluster.
I0925 18:47:06.317866   13239 config.go:182] Loaded profile config "functional-641225": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (40.65s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-641225 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.03s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-641225 logs: (1.02966477s)
--- PASS: TestFunctional/serial/LogsCmd (1.03s)

TestFunctional/serial/LogsFileCmd (1.06s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 logs --file /tmp/TestFunctionalserialLogsFileCmd1738576765/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-641225 logs --file /tmp/TestFunctionalserialLogsFileCmd1738576765/001/logs.txt: (1.060649478s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.06s)

TestFunctional/serial/InvalidService (4.88s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-641225 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-641225
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-641225: exit status 115 (267.347621ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.84:32359 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-641225 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-641225 delete -f testdata/invalidsvc.yaml: (1.417373525s)
--- PASS: TestFunctional/serial/InvalidService (4.88s)

TestFunctional/parallel/ConfigCmd (0.34s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-641225 config get cpus: exit status 14 (62.280228ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-641225 config get cpus: exit status 14 (53.855834ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
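The ConfigCmd round-trip above hinges on `config get` exiting with status 14 when the key is unset, as both non-zero exits in the log show. A minimal sketch of the same sequence, assuming a `minikube` binary on PATH (the profile name is taken from this log):

```shell
#!/bin/sh
# Sketch of the ConfigCmd round-trip above: unset -> get (exit 14),
# set -> get (exit 0), unset -> get (exit 14 again). Guarded so it is a
# no-op where minikube is not installed.
PROFILE="functional-641225"

if command -v minikube >/dev/null 2>&1; then
  minikube -p "$PROFILE" config unset cpus
  minikube -p "$PROFILE" config get cpus
  [ $? -eq 14 ] && echo "unset key signalled via exit status 14"
  minikube -p "$PROFILE" config set cpus 2
  minikube -p "$PROFILE" config get cpus       # prints 2, exits 0
  minikube -p "$PROFILE" config unset cpus     # restore the clean state
else
  echo "minikube not on PATH; skipping"
fi
```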

TestFunctional/parallel/DashboardCmd (38.37s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-641225 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-641225 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23058: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (38.37s)

TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-641225 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-641225 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (173.637699ms)
-- stdout --
	* [functional-641225] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19681-6065/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-6065/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0925 18:47:26.888159   22625 out.go:345] Setting OutFile to fd 1 ...
	I0925 18:47:26.888296   22625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:47:26.888339   22625 out.go:358] Setting ErrFile to fd 2...
	I0925 18:47:26.888355   22625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:47:26.888548   22625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19681-6065/.minikube/bin
	I0925 18:47:26.889199   22625 out.go:352] Setting JSON to false
	I0925 18:47:26.890325   22625 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":1790,"bootTime":1727288257,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 18:47:26.890464   22625 start.go:139] virtualization: kvm guest
	I0925 18:47:26.892737   22625 out.go:177] * [functional-641225] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0925 18:47:26.896435   22625 notify.go:220] Checking for updates...
	I0925 18:47:26.896487   22625 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 18:47:26.898164   22625 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 18:47:26.899768   22625 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19681-6065/kubeconfig
	I0925 18:47:26.900953   22625 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-6065/.minikube
	I0925 18:47:26.902225   22625 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0925 18:47:26.903850   22625 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 18:47:26.905749   22625 config.go:182] Loaded profile config "functional-641225": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 18:47:26.906390   22625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:47:26.906453   22625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:47:26.932626   22625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41215
	I0925 18:47:26.933135   22625 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:47:26.933744   22625 main.go:141] libmachine: Using API Version  1
	I0925 18:47:26.933767   22625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:47:26.934583   22625 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:47:26.934794   22625 main.go:141] libmachine: (functional-641225) Calling .DriverName
	I0925 18:47:26.935035   22625 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 18:47:26.935467   22625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:47:26.935514   22625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:47:26.956469   22625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37079
	I0925 18:47:26.956928   22625 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:47:26.957530   22625 main.go:141] libmachine: Using API Version  1
	I0925 18:47:26.957554   22625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:47:26.958169   22625 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:47:26.958358   22625 main.go:141] libmachine: (functional-641225) Calling .DriverName
	I0925 18:47:26.994763   22625 out.go:177] * Using the kvm2 driver based on existing profile
	I0925 18:47:26.996061   22625 start.go:297] selected driver: kvm2
	I0925 18:47:26.996086   22625 start.go:901] validating driver "kvm2" against &{Name:functional-641225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-641225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 18:47:26.996240   22625 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 18:47:26.998790   22625 out.go:201] 
	W0925 18:47:27.000230   22625 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0925 18:47:27.001541   22625 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-641225 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.31s)
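The DryRun failure path above can be reproduced in isolation: minikube validates the memory request before touching the running cluster, and a 250MB request is below the 1800MB usable minimum reported in the log, so the command exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A minimal sketch, assuming a `minikube` binary on PATH (the profile name is taken from this log):

```shell
#!/bin/sh
# Sketch of the DryRun memory check above: an undersized --memory request
# is rejected with exit status 23 before anything is changed. Guarded so
# it is a no-op where minikube is not installed.
PROFILE="functional-641225"

if command -v minikube >/dev/null 2>&1; then
  minikube start -p "$PROFILE" --dry-run --memory 250MB --driver=kvm2
  [ $? -eq 23 ] && echo "undersized memory request rejected, as in the log"
else
  echo "minikube not on PATH; skipping"
fi
```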

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-641225 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-641225 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (150.01859ms)
-- stdout --
	* [functional-641225] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19681-6065/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-6065/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0925 18:47:26.728215   22586 out.go:345] Setting OutFile to fd 1 ...
	I0925 18:47:26.728337   22586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:47:26.728347   22586 out.go:358] Setting ErrFile to fd 2...
	I0925 18:47:26.728352   22586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:47:26.728646   22586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19681-6065/.minikube/bin
	I0925 18:47:26.729205   22586 out.go:352] Setting JSON to false
	I0925 18:47:26.730291   22586 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":1790,"bootTime":1727288257,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 18:47:26.730392   22586 start.go:139] virtualization: kvm guest
	I0925 18:47:26.733064   22586 out.go:177] * [functional-641225] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0925 18:47:26.734643   22586 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 18:47:26.734741   22586 notify.go:220] Checking for updates...
	I0925 18:47:26.737112   22586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 18:47:26.738461   22586 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19681-6065/kubeconfig
	I0925 18:47:26.741490   22586 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-6065/.minikube
	I0925 18:47:26.742929   22586 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0925 18:47:26.744249   22586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 18:47:26.746200   22586 config.go:182] Loaded profile config "functional-641225": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 18:47:26.746816   22586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:47:26.746866   22586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:47:26.763080   22586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37093
	I0925 18:47:26.763704   22586 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:47:26.764357   22586 main.go:141] libmachine: Using API Version  1
	I0925 18:47:26.764380   22586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:47:26.764673   22586 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:47:26.764847   22586 main.go:141] libmachine: (functional-641225) Calling .DriverName
	I0925 18:47:26.765069   22586 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 18:47:26.765499   22586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:47:26.765536   22586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:47:26.783815   22586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
	I0925 18:47:26.784216   22586 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:47:26.784750   22586 main.go:141] libmachine: Using API Version  1
	I0925 18:47:26.784772   22586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:47:26.785128   22586 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:47:26.785352   22586 main.go:141] libmachine: (functional-641225) Calling .DriverName
	I0925 18:47:26.821248   22586 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0925 18:47:26.822856   22586 start.go:297] selected driver: kvm2
	I0925 18:47:26.822873   22586 start.go:901] validating driver "kvm2" against &{Name:functional-641225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-641225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 18:47:26.822992   22586 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 18:47:26.825899   22586 out.go:201] 
	W0925 18:47:26.827556   22586 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0925 18:47:26.828793   22586 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
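The RSRC_INSUFFICIENT_REQ_MEMORY exit above is the expected, French-localized failure for a deliberately tiny 250MiB request. The kind of memory-floor check that produces it can be sketched as follows (names and structure are illustrative only, not minikube's actual validation code; the 1800MB floor is taken from the log line itself):

```go
package main

import "fmt"

// minUsableMemoryMB mirrors the 1800MB floor quoted in the log above.
const minUsableMemoryMB = 1800

// validateRequestedMemory is a hypothetical stand-in for the driver
// memory validation that rejects the 250MiB request in this test.
func validateRequestedMemory(reqMB int) error {
	if reqMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB",
			reqMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateRequestedMemory(250))  // rejected, as in the log
	fmt.Println(validateRequestedMemory(4000)) // accepted (this profile's Memory:4000)
}
```

The test passes because the command fails: it asserts the error is emitted, and emitted in French.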

                                                
                                    
TestFunctional/parallel/StatusCmd (1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)
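The `-f` argument above is a Go `text/template` rendered over minikube's status struct (note that `kublet:` in the format string is just a literal label chosen by the test, not a field name). A minimal sketch of how such a format string is rendered, using a stand-in struct rather than minikube's real one:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status is a stand-in carrying only the fields the format string
// references; minikube's real status struct has more fields.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// renderStatus applies a `status -f`-style Go template to a Status.
func renderStatus(format string, st Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, st); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	out, _ := renderStatus("host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}", st)
	fmt.Println(out) // host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
}
```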

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-641225 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-641225 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-x2lqm" [e7f82a5b-d2bf-49b3-9ee4-6e808cff51c7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-x2lqm" [e7f82a5b-d2bf-49b3-9ee4-6e808cff51c7] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004106427s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.84:30967
functional_test.go:1675: http://192.168.39.84:30967: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-x2lqm

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.84:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.84:30967
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.60s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (49.13s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [037d546a-5e0a-4b7a-a44b-f602e0de8947] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004488387s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-641225 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-641225 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-641225 get pvc myclaim -o=json
I0925 18:47:20.090525   13239 retry.go:31] will retry after 1.950797093s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:eb1aed5e-d92b-4f99-af63-5b3854506bce ResourceVersion:741 Generation:0 CreationTimestamp:2024-09-25 18:47:19 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001bae290 VolumeMode:0xc001bae2a0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-641225 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-641225 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d13e8285-97dd-467b-9211-1a00df980e3e] Pending
helpers_test.go:344: "sp-pod" [d13e8285-97dd-467b-9211-1a00df980e3e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d13e8285-97dd-467b-9211-1a00df980e3e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.005100498s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-641225 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-641225 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-641225 delete -f testdata/storage-provisioner/pod.yaml: (2.200816534s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-641225 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a1f86559-5709-408b-b985-771d77812ce7] Pending
helpers_test.go:344: "sp-pod" [a1f86559-5709-408b-b985-771d77812ce7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a1f86559-5709-408b-b985-771d77812ce7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.004446017s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-641225 exec sp-pod -- ls /tmp/mount
2024/09/25 18:48:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (49.13s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.41s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.33s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh -n functional-641225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 cp functional-641225:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd942773980/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh -n functional-641225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh -n functional-641225 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.33s)

                                                
                                    
TestFunctional/parallel/MySQL (32.94s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-641225 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-z2vwd" [0d72ca43-4500-4c76-b02e-8d9c6ea25ad3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-z2vwd" [0d72ca43-4500-4c76-b02e-8d9c6ea25ad3] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.012146469s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-641225 exec mysql-6cdb49bbb-z2vwd -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-641225 exec mysql-6cdb49bbb-z2vwd -- mysql -ppassword -e "show databases;": exit status 1 (354.726372ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0925 18:47:55.693816   13239 retry.go:31] will retry after 612.739357ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-641225 exec mysql-6cdb49bbb-z2vwd -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-641225 exec mysql-6cdb49bbb-z2vwd -- mysql -ppassword -e "show databases;": exit status 1 (293.363228ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0925 18:47:56.600390   13239 retry.go:31] will retry after 1.266213936s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-641225 exec mysql-6cdb49bbb-z2vwd -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-641225 exec mysql-6cdb49bbb-z2vwd -- mysql -ppassword -e "show databases;": exit status 1 (147.833644ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0925 18:47:58.015292   13239 retry.go:31] will retry after 2.928423682s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-641225 exec mysql-6cdb49bbb-z2vwd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.94s)

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/13239/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh "sudo cat /etc/test/nested/copy/13239/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
TestFunctional/parallel/CertSync (1.33s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/13239.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh "sudo cat /etc/ssl/certs/13239.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/13239.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh "sudo cat /usr/share/ca-certificates/13239.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/132392.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh "sudo cat /etc/ssl/certs/132392.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/132392.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh "sudo cat /usr/share/ca-certificates/132392.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.33s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.31s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-641225 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.31s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-641225 ssh "sudo systemctl is-active crio": exit status 1 (265.839299ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)

                                                
                                    
TestFunctional/parallel/License (0.19s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-641225 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-641225 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-fmjgv" [6a3e86ef-ea3f-4c6d-b3d3-74c500409a8c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-fmjgv" [6a3e86ef-ea3f-4c6d-b3d3-74c500409a8c] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003846171s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.24s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "316.094729ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "58.025235ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "334.972838ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "46.225719ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.38s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-641225 /tmp/TestFunctionalparallelMountCmdany-port2896277998/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727290035834486436" to /tmp/TestFunctionalparallelMountCmdany-port2896277998/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727290035834486436" to /tmp/TestFunctionalparallelMountCmdany-port2896277998/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727290035834486436" to /tmp/TestFunctionalparallelMountCmdany-port2896277998/001/test-1727290035834486436
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-641225 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (205.524662ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0925 18:47:16.040298   13239 retry.go:31] will retry after 469.607811ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 25 18:47 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 25 18:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 25 18:47 test-1727290035834486436
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh cat /mount-9p/test-1727290035834486436
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-641225 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a5385b5a-1a6d-4804-b6d8-5d255535cf7c] Pending
helpers_test.go:344: "busybox-mount" [a5385b5a-1a6d-4804-b6d8-5d255535cf7c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a5385b5a-1a6d-4804-b6d8-5d255535cf7c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a5385b5a-1a6d-4804-b6d8-5d255535cf7c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003607367s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-641225 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-641225 /tmp/TestFunctionalparallelMountCmdany-port2896277998/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.38s)
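The `retry.go:31] will retry after 469.607811ms` lines above show the test harness retrying the `findmnt` check after a short randomized delay rather than failing on the first non-zero exit. As a rough illustration of that pattern (not minikube's actual implementation; the function name and parameters here are hypothetical):

```python
import time
import random


def retry_with_backoff(check, max_wait=10.0):
    """Retry `check` until it returns without raising, or until max_wait elapses.

    On each failure, sleep a short randomized interval before the next
    attempt, roughly mirroring the "will retry after ...ms" log lines.
    Returns (result, number_of_attempts).
    """
    deadline = time.monotonic() + max_wait
    attempt = 0
    while True:
        attempt += 1
        try:
            return check(), attempt
        except Exception:
            if time.monotonic() >= deadline:
                raise
            # e.g. "will retry after 469.607811ms" in the log above
            time.sleep(random.uniform(0.2, 0.8))
```

This explains why a transient `ssh: Process exited with status 1` does not fail the test: the mount simply was not ready yet on the first probe.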

TestFunctional/parallel/MountCmd/specific-port (1.93s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-641225 /tmp/TestFunctionalparallelMountCmdspecific-port1551594610/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-641225 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (189.737419ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0925 18:47:23.401523   13239 retry.go:31] will retry after 737.287ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-641225 /tmp/TestFunctionalparallelMountCmdspecific-port1551594610/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-641225 ssh "sudo umount -f /mount-9p": exit status 1 (193.674365ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-641225 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-641225 /tmp/TestFunctionalparallelMountCmdspecific-port1551594610/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.93s)
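Note that the `sudo umount -f /mount-9p` step above exits with status 32 and stdout `umount: /mount-9p: not mounted.`, yet the test still passes: during cleanup, "nothing was mounted" is an acceptable outcome. A small illustrative helper (hypothetical, not part of the minikube test suite) capturing that classification:

```python
def umount_already_clean(exit_status: int, stdout: str) -> bool:
    """Return True when a failed `umount -f` only means the path was not
    mounted in the first place, as in the log above (ssh exit status 32,
    stdout "umount: /mount-9p: not mounted.")."""
    return exit_status == 32 and "not mounted" in stdout
```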

TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-641225 /tmp/TestFunctionalparallelMountCmdVerifyCleanup959263349/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-641225 /tmp/TestFunctionalparallelMountCmdVerifyCleanup959263349/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-641225 /tmp/TestFunctionalparallelMountCmdVerifyCleanup959263349/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-641225 ssh "findmnt -T" /mount1: exit status 1 (287.1449ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0925 18:47:25.424971   13239 retry.go:31] will retry after 709.405938ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-641225 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-641225 /tmp/TestFunctionalparallelMountCmdVerifyCleanup959263349/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-641225 /tmp/TestFunctionalparallelMountCmdVerifyCleanup959263349/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-641225 /tmp/TestFunctionalparallelMountCmdVerifyCleanup959263349/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

TestFunctional/parallel/ServiceCmd/List (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.29s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 service list -o json
functional_test.go:1494: Took "304.082103ms" to run "out/minikube-linux-amd64 -p functional-641225 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.84:31446
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

TestFunctional/parallel/DockerEnv/bash (1.09s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-641225 docker-env) && out/minikube-linux-amd64 status -p functional-641225"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-641225 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.09s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.84:31446
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.6s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.60s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-641225 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-641225
docker.io/kicbase/echo-server:functional-641225
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-641225 image ls --format short --alsologtostderr:
I0925 18:47:36.247890   23504 out.go:345] Setting OutFile to fd 1 ...
I0925 18:47:36.248142   23504 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 18:47:36.248153   23504 out.go:358] Setting ErrFile to fd 2...
I0925 18:47:36.248157   23504 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 18:47:36.248419   23504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19681-6065/.minikube/bin
I0925 18:47:36.249028   23504 config.go:182] Loaded profile config "functional-641225": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 18:47:36.249144   23504 config.go:182] Loaded profile config "functional-641225": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 18:47:36.249535   23504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 18:47:36.249600   23504 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 18:47:36.264562   23504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34973
I0925 18:47:36.265057   23504 main.go:141] libmachine: () Calling .GetVersion
I0925 18:47:36.265602   23504 main.go:141] libmachine: Using API Version  1
I0925 18:47:36.265624   23504 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 18:47:36.266006   23504 main.go:141] libmachine: () Calling .GetMachineName
I0925 18:47:36.266197   23504 main.go:141] libmachine: (functional-641225) Calling .GetState
I0925 18:47:36.268028   23504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 18:47:36.268066   23504 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 18:47:36.282856   23504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43183
I0925 18:47:36.283373   23504 main.go:141] libmachine: () Calling .GetVersion
I0925 18:47:36.283829   23504 main.go:141] libmachine: Using API Version  1
I0925 18:47:36.283853   23504 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 18:47:36.284162   23504 main.go:141] libmachine: () Calling .GetMachineName
I0925 18:47:36.284335   23504 main.go:141] libmachine: (functional-641225) Calling .DriverName
I0925 18:47:36.284515   23504 ssh_runner.go:195] Run: systemctl --version
I0925 18:47:36.284547   23504 main.go:141] libmachine: (functional-641225) Calling .GetSSHHostname
I0925 18:47:36.287359   23504 main.go:141] libmachine: (functional-641225) DBG | domain functional-641225 has defined MAC address 52:54:00:e9:0e:a5 in network mk-functional-641225
I0925 18:47:36.287807   23504 main.go:141] libmachine: (functional-641225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:0e:a5", ip: ""} in network mk-functional-641225: {Iface:virbr1 ExpiryTime:2024-09-25 19:44:53 +0000 UTC Type:0 Mac:52:54:00:e9:0e:a5 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:functional-641225 Clientid:01:52:54:00:e9:0e:a5}
I0925 18:47:36.287831   23504 main.go:141] libmachine: (functional-641225) DBG | domain functional-641225 has defined IP address 192.168.39.84 and MAC address 52:54:00:e9:0e:a5 in network mk-functional-641225
I0925 18:47:36.287980   23504 main.go:141] libmachine: (functional-641225) Calling .GetSSHPort
I0925 18:47:36.288173   23504 main.go:141] libmachine: (functional-641225) Calling .GetSSHKeyPath
I0925 18:47:36.288311   23504 main.go:141] libmachine: (functional-641225) Calling .GetSSHUsername
I0925 18:47:36.288446   23504 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/functional-641225/id_rsa Username:docker}
I0925 18:47:36.372722   23504 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0925 18:47:36.402788   23504 main.go:141] libmachine: Making call to close driver server
I0925 18:47:36.402804   23504 main.go:141] libmachine: (functional-641225) Calling .Close
I0925 18:47:36.403152   23504 main.go:141] libmachine: Successfully made call to close driver server
I0925 18:47:36.403172   23504 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 18:47:36.403187   23504 main.go:141] libmachine: Making call to close driver server
I0925 18:47:36.403196   23504 main.go:141] libmachine: (functional-641225) Calling .Close
I0925 18:47:36.403421   23504 main.go:141] libmachine: Successfully made call to close driver server
I0925 18:47:36.403443   23504 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-641225 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| localhost/my-image                          | functional-641225 | a7e3b33cce014 | 1.24MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| docker.io/library/minikube-local-cache-test | functional-641225 | cbf7ea2a10c80 | 30B    |
| docker.io/kicbase/echo-server               | functional-641225 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-641225 image ls --format table --alsologtostderr:
I0925 18:47:40.370931   23688 out.go:345] Setting OutFile to fd 1 ...
I0925 18:47:40.371161   23688 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 18:47:40.371168   23688 out.go:358] Setting ErrFile to fd 2...
I0925 18:47:40.371173   23688 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 18:47:40.371457   23688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19681-6065/.minikube/bin
I0925 18:47:40.372129   23688 config.go:182] Loaded profile config "functional-641225": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 18:47:40.372243   23688 config.go:182] Loaded profile config "functional-641225": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 18:47:40.372708   23688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 18:47:40.372760   23688 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 18:47:40.387508   23688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38497
I0925 18:47:40.387932   23688 main.go:141] libmachine: () Calling .GetVersion
I0925 18:47:40.388540   23688 main.go:141] libmachine: Using API Version  1
I0925 18:47:40.388571   23688 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 18:47:40.388906   23688 main.go:141] libmachine: () Calling .GetMachineName
I0925 18:47:40.389078   23688 main.go:141] libmachine: (functional-641225) Calling .GetState
I0925 18:47:40.391041   23688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 18:47:40.391082   23688 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 18:47:40.405893   23688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43549
I0925 18:47:40.406433   23688 main.go:141] libmachine: () Calling .GetVersion
I0925 18:47:40.406877   23688 main.go:141] libmachine: Using API Version  1
I0925 18:47:40.406896   23688 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 18:47:40.407271   23688 main.go:141] libmachine: () Calling .GetMachineName
I0925 18:47:40.407480   23688 main.go:141] libmachine: (functional-641225) Calling .DriverName
I0925 18:47:40.407679   23688 ssh_runner.go:195] Run: systemctl --version
I0925 18:47:40.407707   23688 main.go:141] libmachine: (functional-641225) Calling .GetSSHHostname
I0925 18:47:40.410290   23688 main.go:141] libmachine: (functional-641225) DBG | domain functional-641225 has defined MAC address 52:54:00:e9:0e:a5 in network mk-functional-641225
I0925 18:47:40.410719   23688 main.go:141] libmachine: (functional-641225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:0e:a5", ip: ""} in network mk-functional-641225: {Iface:virbr1 ExpiryTime:2024-09-25 19:44:53 +0000 UTC Type:0 Mac:52:54:00:e9:0e:a5 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:functional-641225 Clientid:01:52:54:00:e9:0e:a5}
I0925 18:47:40.410749   23688 main.go:141] libmachine: (functional-641225) DBG | domain functional-641225 has defined IP address 192.168.39.84 and MAC address 52:54:00:e9:0e:a5 in network mk-functional-641225
I0925 18:47:40.410912   23688 main.go:141] libmachine: (functional-641225) Calling .GetSSHPort
I0925 18:47:40.411085   23688 main.go:141] libmachine: (functional-641225) Calling .GetSSHKeyPath
I0925 18:47:40.411326   23688 main.go:141] libmachine: (functional-641225) Calling .GetSSHUsername
I0925 18:47:40.411489   23688 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/functional-641225/id_rsa Username:docker}
I0925 18:47:40.492703   23688 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0925 18:47:40.534258   23688 main.go:141] libmachine: Making call to close driver server
I0925 18:47:40.534277   23688 main.go:141] libmachine: (functional-641225) Calling .Close
I0925 18:47:40.534560   23688 main.go:141] libmachine: Successfully made call to close driver server
I0925 18:47:40.534582   23688 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 18:47:40.534599   23688 main.go:141] libmachine: (functional-641225) DBG | Closing plugin on server side
I0925 18:47:40.534662   23688 main.go:141] libmachine: Making call to close driver server
I0925 18:47:40.534675   23688 main.go:141] libmachine: (functional-641225) Calling .Close
I0925 18:47:40.534906   23688 main.go:141] libmachine: Successfully made call to close driver server
I0925 18:47:40.534948   23688 main.go:141] libmachine: (functional-641225) DBG | Closing plugin on server side
I0925 18:47:40.534966   23688 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)
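The table above and the `image ls --format json` output in the next test carry the same data; the JSON form is the easier one to post-process. A minimal sketch of parsing it, using a two-entry sample copied from this report (real output lists every image; the helper name is hypothetical):

```python
import json

# Truncated sample shaped like `minikube image ls --format json` output,
# with two entries taken from this report.
sample = '''[
  {"id": "39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3",
   "repoDigests": [], "repoTags": ["docker.io/library/nginx:latest"], "size": "188000000"},
  {"id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
   "repoDigests": [], "repoTags": ["registry.k8s.io/pause:3.10"], "size": "736000"}
]'''


def images_by_tag(raw: str) -> dict:
    """Map each repo tag to a (short-id, size-in-bytes) pair.

    The 13-character short id matches the "Image ID" column of the
    table output above.
    """
    out = {}
    for entry in json.loads(raw):
        for tag in entry["repoTags"]:
            out[tag] = (entry["id"][:13], int(entry["size"]))
    return out
```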

TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-641225 image ls --format json --alsologtostderr:
[{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-641225"],"size":"4940000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"a7e3b33cce014f744140be8da42a97d5248d23fbe563591ae477e54defa45ec2","repoDigests":[],"repoTags":["localhost/my-image:functional-641225"],"size":"1240000"},{"id":"cbf7ea2a10c800e10575f1bcc48df766d23aab38836dd2d9f3298bf41137f035","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-641225"],"size":"30"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-641225 image ls --format json --alsologtostderr:
I0925 18:47:40.166446   23664 out.go:345] Setting OutFile to fd 1 ...
I0925 18:47:40.166551   23664 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 18:47:40.166561   23664 out.go:358] Setting ErrFile to fd 2...
I0925 18:47:40.166568   23664 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 18:47:40.166778   23664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19681-6065/.minikube/bin
I0925 18:47:40.167426   23664 config.go:182] Loaded profile config "functional-641225": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 18:47:40.167540   23664 config.go:182] Loaded profile config "functional-641225": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 18:47:40.167905   23664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 18:47:40.167955   23664 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 18:47:40.182596   23664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34227
I0925 18:47:40.183134   23664 main.go:141] libmachine: () Calling .GetVersion
I0925 18:47:40.183678   23664 main.go:141] libmachine: Using API Version  1
I0925 18:47:40.183693   23664 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 18:47:40.184038   23664 main.go:141] libmachine: () Calling .GetMachineName
I0925 18:47:40.184201   23664 main.go:141] libmachine: (functional-641225) Calling .GetState
I0925 18:47:40.186224   23664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 18:47:40.186277   23664 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 18:47:40.201092   23664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39867
I0925 18:47:40.201596   23664 main.go:141] libmachine: () Calling .GetVersion
I0925 18:47:40.202035   23664 main.go:141] libmachine: Using API Version  1
I0925 18:47:40.202066   23664 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 18:47:40.202421   23664 main.go:141] libmachine: () Calling .GetMachineName
I0925 18:47:40.202616   23664 main.go:141] libmachine: (functional-641225) Calling .DriverName
I0925 18:47:40.202908   23664 ssh_runner.go:195] Run: systemctl --version
I0925 18:47:40.202933   23664 main.go:141] libmachine: (functional-641225) Calling .GetSSHHostname
I0925 18:47:40.205852   23664 main.go:141] libmachine: (functional-641225) DBG | domain functional-641225 has defined MAC address 52:54:00:e9:0e:a5 in network mk-functional-641225
I0925 18:47:40.206278   23664 main.go:141] libmachine: (functional-641225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:0e:a5", ip: ""} in network mk-functional-641225: {Iface:virbr1 ExpiryTime:2024-09-25 19:44:53 +0000 UTC Type:0 Mac:52:54:00:e9:0e:a5 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:functional-641225 Clientid:01:52:54:00:e9:0e:a5}
I0925 18:47:40.206315   23664 main.go:141] libmachine: (functional-641225) DBG | domain functional-641225 has defined IP address 192.168.39.84 and MAC address 52:54:00:e9:0e:a5 in network mk-functional-641225
I0925 18:47:40.206461   23664 main.go:141] libmachine: (functional-641225) Calling .GetSSHPort
I0925 18:47:40.206656   23664 main.go:141] libmachine: (functional-641225) Calling .GetSSHKeyPath
I0925 18:47:40.206823   23664 main.go:141] libmachine: (functional-641225) Calling .GetSSHUsername
I0925 18:47:40.206950   23664 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/functional-641225/id_rsa Username:docker}
I0925 18:47:40.285079   23664 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0925 18:47:40.320657   23664 main.go:141] libmachine: Making call to close driver server
I0925 18:47:40.320675   23664 main.go:141] libmachine: (functional-641225) Calling .Close
I0925 18:47:40.320977   23664 main.go:141] libmachine: Successfully made call to close driver server
I0925 18:47:40.320996   23664 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 18:47:40.321004   23664 main.go:141] libmachine: (functional-641225) DBG | Closing plugin on server side
I0925 18:47:40.321008   23664 main.go:141] libmachine: Making call to close driver server
I0925 18:47:40.321018   23664 main.go:141] libmachine: (functional-641225) Calling .Close
I0925 18:47:40.321258   23664 main.go:141] libmachine: Successfully made call to close driver server
I0925 18:47:40.321269   23664 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-641225 image ls --format yaml --alsologtostderr:
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-641225
size: "4940000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: cbf7ea2a10c800e10575f1bcc48df766d23aab38836dd2d9f3298bf41137f035
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-641225
size: "30"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-641225 image ls --format yaml --alsologtostderr:
I0925 18:47:36.453944   23527 out.go:345] Setting OutFile to fd 1 ...
I0925 18:47:36.454059   23527 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 18:47:36.454068   23527 out.go:358] Setting ErrFile to fd 2...
I0925 18:47:36.454072   23527 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 18:47:36.454276   23527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19681-6065/.minikube/bin
I0925 18:47:36.454888   23527 config.go:182] Loaded profile config "functional-641225": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 18:47:36.455002   23527 config.go:182] Loaded profile config "functional-641225": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 18:47:36.455391   23527 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 18:47:36.455445   23527 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 18:47:36.470766   23527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46621
I0925 18:47:36.471364   23527 main.go:141] libmachine: () Calling .GetVersion
I0925 18:47:36.472001   23527 main.go:141] libmachine: Using API Version  1
I0925 18:47:36.472024   23527 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 18:47:36.472420   23527 main.go:141] libmachine: () Calling .GetMachineName
I0925 18:47:36.472646   23527 main.go:141] libmachine: (functional-641225) Calling .GetState
I0925 18:47:36.474776   23527 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 18:47:36.474820   23527 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 18:47:36.490187   23527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40899
I0925 18:47:36.490694   23527 main.go:141] libmachine: () Calling .GetVersion
I0925 18:47:36.491314   23527 main.go:141] libmachine: Using API Version  1
I0925 18:47:36.491340   23527 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 18:47:36.491645   23527 main.go:141] libmachine: () Calling .GetMachineName
I0925 18:47:36.491814   23527 main.go:141] libmachine: (functional-641225) Calling .DriverName
I0925 18:47:36.492007   23527 ssh_runner.go:195] Run: systemctl --version
I0925 18:47:36.492034   23527 main.go:141] libmachine: (functional-641225) Calling .GetSSHHostname
I0925 18:47:36.494707   23527 main.go:141] libmachine: (functional-641225) DBG | domain functional-641225 has defined MAC address 52:54:00:e9:0e:a5 in network mk-functional-641225
I0925 18:47:36.495042   23527 main.go:141] libmachine: (functional-641225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:0e:a5", ip: ""} in network mk-functional-641225: {Iface:virbr1 ExpiryTime:2024-09-25 19:44:53 +0000 UTC Type:0 Mac:52:54:00:e9:0e:a5 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:functional-641225 Clientid:01:52:54:00:e9:0e:a5}
I0925 18:47:36.495069   23527 main.go:141] libmachine: (functional-641225) DBG | domain functional-641225 has defined IP address 192.168.39.84 and MAC address 52:54:00:e9:0e:a5 in network mk-functional-641225
I0925 18:47:36.495193   23527 main.go:141] libmachine: (functional-641225) Calling .GetSSHPort
I0925 18:47:36.495347   23527 main.go:141] libmachine: (functional-641225) Calling .GetSSHKeyPath
I0925 18:47:36.495516   23527 main.go:141] libmachine: (functional-641225) Calling .GetSSHUsername
I0925 18:47:36.495645   23527 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/functional-641225/id_rsa Username:docker}
I0925 18:47:36.573315   23527 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0925 18:47:36.606984   23527 main.go:141] libmachine: Making call to close driver server
I0925 18:47:36.607020   23527 main.go:141] libmachine: (functional-641225) Calling .Close
I0925 18:47:36.607277   23527 main.go:141] libmachine: Successfully made call to close driver server
I0925 18:47:36.607295   23527 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 18:47:36.607306   23527 main.go:141] libmachine: (functional-641225) DBG | Closing plugin on server side
I0925 18:47:36.607308   23527 main.go:141] libmachine: Making call to close driver server
I0925 18:47:36.607329   23527 main.go:141] libmachine: (functional-641225) Calling .Close
I0925 18:47:36.607534   23527 main.go:141] libmachine: Successfully made call to close driver server
I0925 18:47:36.607550   23527 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-641225 ssh pgrep buildkitd: exit status 1 (185.112531ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 image build -t localhost/my-image:functional-641225 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-641225 image build -t localhost/my-image:functional-641225 testdata/build --alsologtostderr: (3.127301804s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-641225 image build -t localhost/my-image:functional-641225 testdata/build --alsologtostderr:
I0925 18:47:36.839106   23579 out.go:345] Setting OutFile to fd 1 ...
I0925 18:47:36.839264   23579 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 18:47:36.839273   23579 out.go:358] Setting ErrFile to fd 2...
I0925 18:47:36.839278   23579 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 18:47:36.839451   23579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19681-6065/.minikube/bin
I0925 18:47:36.840014   23579 config.go:182] Loaded profile config "functional-641225": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 18:47:36.840539   23579 config.go:182] Loaded profile config "functional-641225": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 18:47:36.840900   23579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 18:47:36.840950   23579 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 18:47:36.856812   23579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40123
I0925 18:47:36.857208   23579 main.go:141] libmachine: () Calling .GetVersion
I0925 18:47:36.857837   23579 main.go:141] libmachine: Using API Version  1
I0925 18:47:36.857873   23579 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 18:47:36.858207   23579 main.go:141] libmachine: () Calling .GetMachineName
I0925 18:47:36.858380   23579 main.go:141] libmachine: (functional-641225) Calling .GetState
I0925 18:47:36.860311   23579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 18:47:36.860360   23579 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 18:47:36.875149   23579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45133
I0925 18:47:36.875620   23579 main.go:141] libmachine: () Calling .GetVersion
I0925 18:47:36.876107   23579 main.go:141] libmachine: Using API Version  1
I0925 18:47:36.876133   23579 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 18:47:36.876454   23579 main.go:141] libmachine: () Calling .GetMachineName
I0925 18:47:36.876647   23579 main.go:141] libmachine: (functional-641225) Calling .DriverName
I0925 18:47:36.876858   23579 ssh_runner.go:195] Run: systemctl --version
I0925 18:47:36.876896   23579 main.go:141] libmachine: (functional-641225) Calling .GetSSHHostname
I0925 18:47:36.879656   23579 main.go:141] libmachine: (functional-641225) DBG | domain functional-641225 has defined MAC address 52:54:00:e9:0e:a5 in network mk-functional-641225
I0925 18:47:36.880060   23579 main.go:141] libmachine: (functional-641225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:0e:a5", ip: ""} in network mk-functional-641225: {Iface:virbr1 ExpiryTime:2024-09-25 19:44:53 +0000 UTC Type:0 Mac:52:54:00:e9:0e:a5 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:functional-641225 Clientid:01:52:54:00:e9:0e:a5}
I0925 18:47:36.880093   23579 main.go:141] libmachine: (functional-641225) DBG | domain functional-641225 has defined IP address 192.168.39.84 and MAC address 52:54:00:e9:0e:a5 in network mk-functional-641225
I0925 18:47:36.880201   23579 main.go:141] libmachine: (functional-641225) Calling .GetSSHPort
I0925 18:47:36.880354   23579 main.go:141] libmachine: (functional-641225) Calling .GetSSHKeyPath
I0925 18:47:36.880490   23579 main.go:141] libmachine: (functional-641225) Calling .GetSSHUsername
I0925 18:47:36.880566   23579 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/functional-641225/id_rsa Username:docker}
I0925 18:47:36.961901   23579 build_images.go:161] Building image from path: /tmp/build.2293292469.tar
I0925 18:47:36.961977   23579 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0925 18:47:36.981062   23579 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2293292469.tar
I0925 18:47:36.990019   23579 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2293292469.tar: stat -c "%s %y" /var/lib/minikube/build/build.2293292469.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2293292469.tar': No such file or directory
I0925 18:47:36.990067   23579 ssh_runner.go:362] scp /tmp/build.2293292469.tar --> /var/lib/minikube/build/build.2293292469.tar (3072 bytes)
I0925 18:47:37.031595   23579 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2293292469
I0925 18:47:37.049787   23579 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2293292469 -xf /var/lib/minikube/build/build.2293292469.tar
I0925 18:47:37.060573   23579 docker.go:360] Building image: /var/lib/minikube/build/build.2293292469
I0925 18:47:37.060641   23579 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-641225 /var/lib/minikube/build/build.2293292469
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#4 ...

                                                
                                                
#5 [internal] load build context
#5 transferring context: 62B done
#5 DONE 0.1s

                                                
                                                
#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#4 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#4 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#4 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.8s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.2s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:a7e3b33cce014f744140be8da42a97d5248d23fbe563591ae477e54defa45ec2 done
#8 naming to localhost/my-image:functional-641225 done
#8 DONE 0.1s
I0925 18:47:39.885768   23579 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-641225 /var/lib/minikube/build/build.2293292469: (2.825106952s)
I0925 18:47:39.885839   23579 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2293292469
I0925 18:47:39.902272   23579 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2293292469.tar
I0925 18:47:39.919454   23579 build_images.go:217] Built localhost/my-image:functional-641225 from /tmp/build.2293292469.tar
I0925 18:47:39.919548   23579 build_images.go:133] succeeded building to: functional-641225
I0925 18:47:39.919556   23579 build_images.go:134] failed building to: 
I0925 18:47:39.919585   23579 main.go:141] libmachine: Making call to close driver server
I0925 18:47:39.919600   23579 main.go:141] libmachine: (functional-641225) Calling .Close
I0925 18:47:39.919924   23579 main.go:141] libmachine: Successfully made call to close driver server
I0925 18:47:39.919943   23579 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 18:47:39.919956   23579 main.go:141] libmachine: Making call to close driver server
I0925 18:47:39.919965   23579 main.go:141] libmachine: (functional-641225) Calling .Close
I0925 18:47:39.920203   23579 main.go:141] libmachine: Successfully made call to close driver server
I0925 18:47:39.920219   23579 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.016541702s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-641225
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 image load --daemon kicbase/echo-server:functional-641225 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-641225 image load --daemon kicbase/echo-server:functional-641225 --alsologtostderr: (1.286222566s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 image load --daemon kicbase/echo-server:functional-641225 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-641225
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 image load --daemon kicbase/echo-server:functional-641225 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 image save kicbase/echo-server:functional-641225 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 image rm kicbase/echo-server:functional-641225 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-641225
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-641225 image save --daemon kicbase/echo-server:functional-641225 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-641225
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-641225
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-641225
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-641225
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestGvisorAddon (204.82s)

                                                
                                                
=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

                                                
                                                

                                                
                                                
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-745680 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-745680 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (50.67838269s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-745680 cache add gcr.io/k8s-minikube/gvisor-addon:2
I0925 19:31:12.363624   13239 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0925 19:31:12.396142   13239 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0925 19:31:12.396178   13239 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0925 19:31:12.396239   13239 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0925 19:31:12.396272   13239 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1241106359/002/docker-machine-driver-kvm2
I0925 19:31:12.561925   13239 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1241106359/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x466e640 0x466e640 0x466e640 0x466e640 0x466e640 0x466e640 0x466e640] Decompressors:map[bz2:0xc0004abb10 gz:0xc0004abb18 tar:0xc0004abac0 tar.bz2:0xc0004abad0 tar.gz:0xc0004abae0 tar.xz:0xc0004abaf0 tar.zst:0xc0004abb00 tbz2:0xc0004abad0 tgz:0xc0004abae0 txz:0xc0004abaf0 tzst:0xc0004abb00 xz:0xc0004abb20 zip:0xc0004abb30 zst:0xc0004abb28] Getters:map[file:0xc00221ad40 http:0xc000175540 https:0xc000175590] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0925 19:31:12.561977   13239 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1241106359/002/docker-machine-driver-kvm2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-745680 cache add gcr.io/k8s-minikube/gvisor-addon:2: (22.337890579s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-745680 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-745680 addons enable gvisor: (2.621069139s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [c0956614-331c-432e-a09e-07f79c2dfd0f] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004720177s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-745680 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [d81ffe9b-f0d1-4f95-bd3d-d3ed29d6991f] Pending
helpers_test.go:344: "nginx-gvisor" [d81ffe9b-f0d1-4f95-bd3d-d3ed29d6991f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0925 19:32:13.771823   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "nginx-gvisor" [d81ffe9b-f0d1-4f95-bd3d-d3ed29d6991f] Running
E0925 19:32:31.947940   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/skaffold-816010/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 54.005301623s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-745680
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-745680: (2.436446765s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-745680 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E0925 19:32:42.189314   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/skaffold-816010/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-745680 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (54.364292256s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [c0956614-331c-432e-a09e-07f79c2dfd0f] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.005010196s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [d81ffe9b-f0d1-4f95-bd3d-d3ed29d6991f] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0925 19:33:43.633064   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/skaffold-816010/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.004488375s
helpers_test.go:175: Cleaning up "gvisor-745680" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-745680
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-745680: (1.158176867s)
--- PASS: TestGvisorAddon (204.82s)
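As an aside, the `driver.go:46` / `download.go:107` lines in this section show minikube falling back from the arch-specific driver asset (`…-amd64`, whose checksum file 404s) to the common asset. A minimal, hypothetical sketch of that candidate-URL fallback — `candidate_urls`, `download_driver`, and the injected `fetch` callable are invented names for illustration, not minikube's actual code:

```python
# Illustrative only: mirrors the fallback visible in the log, where the
# arch-specific asset 404s and the common asset is tried next.
BASE = "https://github.com/kubernetes/minikube/releases/download"

def candidate_urls(version: str, driver: str, arch: str) -> list:
    return [
        f"{BASE}/{version}/{driver}-{arch}",  # arch-specific, tried first
        f"{BASE}/{version}/{driver}",         # common fallback
    ]

def download_driver(version: str, driver: str, arch: str, fetch):
    # `fetch` stands in for the real checksum-verified HTTP getter:
    # it returns True on success, False on failures such as the 404 above.
    for url in candidate_urls(version, driver, arch):
        if fetch(url):
            return url
    raise RuntimeError(f"no release asset found for {driver}")
```

The real logic (including the `?checksum=file:…sha256` query handled by go-getter) lives in minikube's download package; this sketch only captures the try-arch-then-common ordering.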
TestMultiControlPlane/serial/StartCluster (223.34s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-351458 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0925 18:48:22.760202   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:48:22.766576   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:48:22.777992   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:48:22.799403   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:48:22.840766   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:48:22.922237   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:48:23.083813   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:48:23.405542   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:48:24.047758   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:48:25.329681   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:48:27.891760   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:48:33.013828   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:48:43.255533   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:49:03.737383   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:49:44.699273   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:51:06.621791   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-351458 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m42.629743597s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (223.34s)
TestMultiControlPlane/serial/DeployApp (36.57s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-351458 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-351458 -- rollout status deployment/busybox
E0925 18:52:13.770796   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:52:13.777180   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:52:13.788561   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:52:13.809953   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:52:13.851369   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:52:13.932809   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:52:14.094314   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:52:14.415979   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:52:15.057781   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:52:16.339376   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:52:18.901430   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:52:24.023319   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-351458 -- rollout status deployment/busybox: (34.294252888s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-351458 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-351458 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-351458 -- exec busybox-7dff88458-5kpzw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-351458 -- exec busybox-7dff88458-gztnw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-351458 -- exec busybox-7dff88458-w7gjp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-351458 -- exec busybox-7dff88458-5kpzw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-351458 -- exec busybox-7dff88458-gztnw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-351458 -- exec busybox-7dff88458-w7gjp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-351458 -- exec busybox-7dff88458-5kpzw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-351458 -- exec busybox-7dff88458-gztnw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-351458 -- exec busybox-7dff88458-w7gjp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (36.57s)
TestMultiControlPlane/serial/PingHostFromPods (1.31s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-351458 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-351458 -- exec busybox-7dff88458-5kpzw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-351458 -- exec busybox-7dff88458-5kpzw -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-351458 -- exec busybox-7dff88458-gztnw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-351458 -- exec busybox-7dff88458-gztnw -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-351458 -- exec busybox-7dff88458-w7gjp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-351458 -- exec busybox-7dff88458-w7gjp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.31s)
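The pipeline in ha_test.go:207 above, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, keeps the fifth line of busybox nslookup output and extracts its third single-space-separated field (the host IP, then pinged by ha_test.go:218). A Python sketch of the same extraction, run against made-up sample output — the exact nslookup layout varies by busybox version, so the sample is illustrative, not captured from this run:

```python
def host_ip_from_nslookup(output: str) -> str:
    # awk 'NR==5' keeps only line 5; cut -d' ' -f3 takes the 3rd field
    # when splitting on single spaces (consecutive delimiters yield
    # empty fields, as with cut, unlike str.split() with no argument).
    line5 = output.splitlines()[4]
    return line5.split(" ")[2]

# Sample busybox-style output (illustrative):
sample = (
    "Server:    10.96.0.10\n"
    "Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n"
    "\n"
    "Name:      host.minikube.internal\n"
    "Address 1: 192.168.39.1 host.minikube.internal\n"
)
print(host_ip_from_nslookup(sample))  # -> 192.168.39.1
```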
TestMultiControlPlane/serial/AddWorkerNode (61.84s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-351458 -v=7 --alsologtostderr
E0925 18:52:34.264674   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:52:54.746919   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:53:22.760233   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-351458 -v=7 --alsologtostderr: (1m0.976113665s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.84s)
TestMultiControlPlane/serial/NodeLabels (0.08s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-351458 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)
TestMultiControlPlane/serial/CopyFile (13.03s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 cp testdata/cp-test.txt ha-351458:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 cp ha-351458:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2509074865/001/cp-test_ha-351458.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 cp ha-351458:/home/docker/cp-test.txt ha-351458-m02:/home/docker/cp-test_ha-351458_ha-351458-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m02 "sudo cat /home/docker/cp-test_ha-351458_ha-351458-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 cp ha-351458:/home/docker/cp-test.txt ha-351458-m03:/home/docker/cp-test_ha-351458_ha-351458-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m03 "sudo cat /home/docker/cp-test_ha-351458_ha-351458-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 cp ha-351458:/home/docker/cp-test.txt ha-351458-m04:/home/docker/cp-test_ha-351458_ha-351458-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m04 "sudo cat /home/docker/cp-test_ha-351458_ha-351458-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 cp testdata/cp-test.txt ha-351458-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 cp ha-351458-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2509074865/001/cp-test_ha-351458-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 cp ha-351458-m02:/home/docker/cp-test.txt ha-351458:/home/docker/cp-test_ha-351458-m02_ha-351458.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m02 "sudo cat /home/docker/cp-test.txt"
E0925 18:53:35.708695   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458 "sudo cat /home/docker/cp-test_ha-351458-m02_ha-351458.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 cp ha-351458-m02:/home/docker/cp-test.txt ha-351458-m03:/home/docker/cp-test_ha-351458-m02_ha-351458-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m03 "sudo cat /home/docker/cp-test_ha-351458-m02_ha-351458-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 cp ha-351458-m02:/home/docker/cp-test.txt ha-351458-m04:/home/docker/cp-test_ha-351458-m02_ha-351458-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m04 "sudo cat /home/docker/cp-test_ha-351458-m02_ha-351458-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 cp testdata/cp-test.txt ha-351458-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 cp ha-351458-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2509074865/001/cp-test_ha-351458-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 cp ha-351458-m03:/home/docker/cp-test.txt ha-351458:/home/docker/cp-test_ha-351458-m03_ha-351458.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458 "sudo cat /home/docker/cp-test_ha-351458-m03_ha-351458.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 cp ha-351458-m03:/home/docker/cp-test.txt ha-351458-m02:/home/docker/cp-test_ha-351458-m03_ha-351458-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m02 "sudo cat /home/docker/cp-test_ha-351458-m03_ha-351458-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 cp ha-351458-m03:/home/docker/cp-test.txt ha-351458-m04:/home/docker/cp-test_ha-351458-m03_ha-351458-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m04 "sudo cat /home/docker/cp-test_ha-351458-m03_ha-351458-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 cp testdata/cp-test.txt ha-351458-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 cp ha-351458-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2509074865/001/cp-test_ha-351458-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 cp ha-351458-m04:/home/docker/cp-test.txt ha-351458:/home/docker/cp-test_ha-351458-m04_ha-351458.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458 "sudo cat /home/docker/cp-test_ha-351458-m04_ha-351458.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 cp ha-351458-m04:/home/docker/cp-test.txt ha-351458-m02:/home/docker/cp-test_ha-351458-m04_ha-351458-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m02 "sudo cat /home/docker/cp-test_ha-351458-m04_ha-351458-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 cp ha-351458-m04:/home/docker/cp-test.txt ha-351458-m03:/home/docker/cp-test_ha-351458-m04_ha-351458-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 ssh -n ha-351458-m03 "sudo cat /home/docker/cp-test_ha-351458-m04_ha-351458-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.03s)
TestMultiControlPlane/serial/StopSecondaryNode (13.94s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 node stop m02 -v=7 --alsologtostderr
E0925 18:53:50.463893   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-351458 node stop m02 -v=7 --alsologtostderr: (13.302936017s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-351458 status -v=7 --alsologtostderr: exit status 7 (640.411934ms)
-- stdout --
	ha-351458
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-351458-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-351458-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-351458-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
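The stdout block above is the plain-text `minikube status` layout: a bare line naming each node, then `key: value` fields, with a blank line separating nodes. A hedged sketch of parsing that text into a dict — an illustrative helper under that layout assumption, not a minikube API (`minikube status --output json` exists for machine consumption):

```python
def parse_status(text: str) -> dict:
    """Parse plain-text `minikube status` output into {node: {key: value}}.

    Illustrative only; assumes the layout shown above: a bare node-name
    line, then colon-separated fields, with blank lines between nodes.
    """
    nodes, current = {}, None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            current = None          # blank line ends the current node block
        elif ":" not in line:
            current = line          # bare line starts a new node block
            nodes[current] = {}
        elif current is not None:
            key, _, value = line.partition(":")
            nodes[current][key.strip()] = value.strip()
    return nodes

sample = (
    "ha-351458\n"
    "type: Control Plane\n"
    "host: Running\n"
    "\n"
    "ha-351458-m02\n"
    "type: Control Plane\n"
    "host: Stopped\n"
)
print(parse_status(sample)["ha-351458-m02"]["host"])  # -> Stopped
```

A parser like this is what a wrapper script might use to confirm what the test asserts here: exit status 7 with the stopped secondary (`m02`) reported as `host: Stopped` while the other nodes stay `Running`.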
** stderr ** 
	I0925 18:53:56.919001   28345 out.go:345] Setting OutFile to fd 1 ...
	I0925 18:53:56.919144   28345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:53:56.919155   28345 out.go:358] Setting ErrFile to fd 2...
	I0925 18:53:56.919162   28345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:53:56.919395   28345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19681-6065/.minikube/bin
	I0925 18:53:56.919584   28345 out.go:352] Setting JSON to false
	I0925 18:53:56.919615   28345 mustload.go:65] Loading cluster: ha-351458
	I0925 18:53:56.919678   28345 notify.go:220] Checking for updates...
	I0925 18:53:56.920130   28345 config.go:182] Loaded profile config "ha-351458": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 18:53:56.920155   28345 status.go:174] checking status of ha-351458 ...
	I0925 18:53:56.920830   28345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:53:56.920906   28345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:53:56.938810   28345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43709
	I0925 18:53:56.939457   28345 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:53:56.940278   28345 main.go:141] libmachine: Using API Version  1
	I0925 18:53:56.940302   28345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:53:56.940745   28345 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:53:56.940978   28345 main.go:141] libmachine: (ha-351458) Calling .GetState
	I0925 18:53:56.942950   28345 status.go:364] ha-351458 host status = "Running" (err=<nil>)
	I0925 18:53:56.942971   28345 host.go:66] Checking if "ha-351458" exists ...
	I0925 18:53:56.943401   28345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:53:56.943447   28345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:53:56.960528   28345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33329
	I0925 18:53:56.961004   28345 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:53:56.961499   28345 main.go:141] libmachine: Using API Version  1
	I0925 18:53:56.961516   28345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:53:56.961912   28345 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:53:56.962137   28345 main.go:141] libmachine: (ha-351458) Calling .GetIP
	I0925 18:53:56.965437   28345 main.go:141] libmachine: (ha-351458) DBG | domain ha-351458 has defined MAC address 52:54:00:48:06:08 in network mk-ha-351458
	I0925 18:53:56.965875   28345 main.go:141] libmachine: (ha-351458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:06:08", ip: ""} in network mk-ha-351458: {Iface:virbr1 ExpiryTime:2024-09-25 19:48:21 +0000 UTC Type:0 Mac:52:54:00:48:06:08 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-351458 Clientid:01:52:54:00:48:06:08}
	I0925 18:53:56.965902   28345 main.go:141] libmachine: (ha-351458) DBG | domain ha-351458 has defined IP address 192.168.39.114 and MAC address 52:54:00:48:06:08 in network mk-ha-351458
	I0925 18:53:56.966087   28345 host.go:66] Checking if "ha-351458" exists ...
	I0925 18:53:56.966379   28345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:53:56.966424   28345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:53:56.981851   28345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46159
	I0925 18:53:56.982299   28345 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:53:56.982837   28345 main.go:141] libmachine: Using API Version  1
	I0925 18:53:56.982861   28345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:53:56.983203   28345 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:53:56.983435   28345 main.go:141] libmachine: (ha-351458) Calling .DriverName
	I0925 18:53:56.983615   28345 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0925 18:53:56.983643   28345 main.go:141] libmachine: (ha-351458) Calling .GetSSHHostname
	I0925 18:53:56.986882   28345 main.go:141] libmachine: (ha-351458) DBG | domain ha-351458 has defined MAC address 52:54:00:48:06:08 in network mk-ha-351458
	I0925 18:53:56.987398   28345 main.go:141] libmachine: (ha-351458) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:06:08", ip: ""} in network mk-ha-351458: {Iface:virbr1 ExpiryTime:2024-09-25 19:48:21 +0000 UTC Type:0 Mac:52:54:00:48:06:08 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-351458 Clientid:01:52:54:00:48:06:08}
	I0925 18:53:56.987452   28345 main.go:141] libmachine: (ha-351458) DBG | domain ha-351458 has defined IP address 192.168.39.114 and MAC address 52:54:00:48:06:08 in network mk-ha-351458
	I0925 18:53:56.987631   28345 main.go:141] libmachine: (ha-351458) Calling .GetSSHPort
	I0925 18:53:56.987833   28345 main.go:141] libmachine: (ha-351458) Calling .GetSSHKeyPath
	I0925 18:53:56.987979   28345 main.go:141] libmachine: (ha-351458) Calling .GetSSHUsername
	I0925 18:53:56.988117   28345 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/ha-351458/id_rsa Username:docker}
	I0925 18:53:57.070525   28345 ssh_runner.go:195] Run: systemctl --version
	I0925 18:53:57.078047   28345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 18:53:57.095512   28345 kubeconfig.go:125] found "ha-351458" server: "https://192.168.39.254:8443"
	I0925 18:53:57.095543   28345 api_server.go:166] Checking apiserver status ...
	I0925 18:53:57.095575   28345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 18:53:57.115287   28345 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1935/cgroup
	W0925 18:53:57.129523   28345 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1935/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0925 18:53:57.129632   28345 ssh_runner.go:195] Run: ls
	I0925 18:53:57.135233   28345 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0925 18:53:57.140431   28345 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0925 18:53:57.140458   28345 status.go:456] ha-351458 apiserver status = Running (err=<nil>)
	I0925 18:53:57.140473   28345 status.go:176] ha-351458 status: &{Name:ha-351458 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0925 18:53:57.140496   28345 status.go:174] checking status of ha-351458-m02 ...
	I0925 18:53:57.140822   28345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:53:57.140862   28345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:53:57.155771   28345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42297
	I0925 18:53:57.156258   28345 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:53:57.156831   28345 main.go:141] libmachine: Using API Version  1
	I0925 18:53:57.156861   28345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:53:57.157261   28345 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:53:57.157506   28345 main.go:141] libmachine: (ha-351458-m02) Calling .GetState
	I0925 18:53:57.159109   28345 status.go:364] ha-351458-m02 host status = "Stopped" (err=<nil>)
	I0925 18:53:57.159125   28345 status.go:377] host is not running, skipping remaining checks
	I0925 18:53:57.159130   28345 status.go:176] ha-351458-m02 status: &{Name:ha-351458-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0925 18:53:57.159151   28345 status.go:174] checking status of ha-351458-m03 ...
	I0925 18:53:57.159583   28345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:53:57.159630   28345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:53:57.174912   28345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45937
	I0925 18:53:57.175439   28345 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:53:57.175930   28345 main.go:141] libmachine: Using API Version  1
	I0925 18:53:57.175950   28345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:53:57.176303   28345 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:53:57.176498   28345 main.go:141] libmachine: (ha-351458-m03) Calling .GetState
	I0925 18:53:57.178386   28345 status.go:364] ha-351458-m03 host status = "Running" (err=<nil>)
	I0925 18:53:57.178404   28345 host.go:66] Checking if "ha-351458-m03" exists ...
	I0925 18:53:57.178688   28345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:53:57.178721   28345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:53:57.194162   28345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43491
	I0925 18:53:57.194532   28345 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:53:57.194977   28345 main.go:141] libmachine: Using API Version  1
	I0925 18:53:57.195000   28345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:53:57.195359   28345 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:53:57.195564   28345 main.go:141] libmachine: (ha-351458-m03) Calling .GetIP
	I0925 18:53:57.198119   28345 main.go:141] libmachine: (ha-351458-m03) DBG | domain ha-351458-m03 has defined MAC address 52:54:00:da:2f:8c in network mk-ha-351458
	I0925 18:53:57.198561   28345 main.go:141] libmachine: (ha-351458-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:2f:8c", ip: ""} in network mk-ha-351458: {Iface:virbr1 ExpiryTime:2024-09-25 19:50:38 +0000 UTC Type:0 Mac:52:54:00:da:2f:8c Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-351458-m03 Clientid:01:52:54:00:da:2f:8c}
	I0925 18:53:57.198584   28345 main.go:141] libmachine: (ha-351458-m03) DBG | domain ha-351458-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:da:2f:8c in network mk-ha-351458
	I0925 18:53:57.198714   28345 host.go:66] Checking if "ha-351458-m03" exists ...
	I0925 18:53:57.199000   28345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:53:57.199035   28345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:53:57.214051   28345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36855
	I0925 18:53:57.214520   28345 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:53:57.215015   28345 main.go:141] libmachine: Using API Version  1
	I0925 18:53:57.215035   28345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:53:57.215309   28345 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:53:57.215498   28345 main.go:141] libmachine: (ha-351458-m03) Calling .DriverName
	I0925 18:53:57.215665   28345 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0925 18:53:57.215687   28345 main.go:141] libmachine: (ha-351458-m03) Calling .GetSSHHostname
	I0925 18:53:57.218418   28345 main.go:141] libmachine: (ha-351458-m03) DBG | domain ha-351458-m03 has defined MAC address 52:54:00:da:2f:8c in network mk-ha-351458
	I0925 18:53:57.218805   28345 main.go:141] libmachine: (ha-351458-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:2f:8c", ip: ""} in network mk-ha-351458: {Iface:virbr1 ExpiryTime:2024-09-25 19:50:38 +0000 UTC Type:0 Mac:52:54:00:da:2f:8c Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-351458-m03 Clientid:01:52:54:00:da:2f:8c}
	I0925 18:53:57.218826   28345 main.go:141] libmachine: (ha-351458-m03) DBG | domain ha-351458-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:da:2f:8c in network mk-ha-351458
	I0925 18:53:57.219001   28345 main.go:141] libmachine: (ha-351458-m03) Calling .GetSSHPort
	I0925 18:53:57.219170   28345 main.go:141] libmachine: (ha-351458-m03) Calling .GetSSHKeyPath
	I0925 18:53:57.219293   28345 main.go:141] libmachine: (ha-351458-m03) Calling .GetSSHUsername
	I0925 18:53:57.219444   28345 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/ha-351458-m03/id_rsa Username:docker}
	I0925 18:53:57.302138   28345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 18:53:57.320879   28345 kubeconfig.go:125] found "ha-351458" server: "https://192.168.39.254:8443"
	I0925 18:53:57.320911   28345 api_server.go:166] Checking apiserver status ...
	I0925 18:53:57.320947   28345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 18:53:57.338203   28345 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1824/cgroup
	W0925 18:53:57.351461   28345 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1824/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0925 18:53:57.351518   28345 ssh_runner.go:195] Run: ls
	I0925 18:53:57.356902   28345 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0925 18:53:57.362874   28345 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0925 18:53:57.362902   28345 status.go:456] ha-351458-m03 apiserver status = Running (err=<nil>)
	I0925 18:53:57.362913   28345 status.go:176] ha-351458-m03 status: &{Name:ha-351458-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0925 18:53:57.362930   28345 status.go:174] checking status of ha-351458-m04 ...
	I0925 18:53:57.363321   28345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:53:57.363367   28345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:53:57.378625   28345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44501
	I0925 18:53:57.379084   28345 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:53:57.379589   28345 main.go:141] libmachine: Using API Version  1
	I0925 18:53:57.379608   28345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:53:57.379920   28345 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:53:57.380100   28345 main.go:141] libmachine: (ha-351458-m04) Calling .GetState
	I0925 18:53:57.381777   28345 status.go:364] ha-351458-m04 host status = "Running" (err=<nil>)
	I0925 18:53:57.381791   28345 host.go:66] Checking if "ha-351458-m04" exists ...
	I0925 18:53:57.382167   28345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:53:57.382216   28345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:53:57.397498   28345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46575
	I0925 18:53:57.397876   28345 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:53:57.398351   28345 main.go:141] libmachine: Using API Version  1
	I0925 18:53:57.398376   28345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:53:57.398644   28345 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:53:57.398829   28345 main.go:141] libmachine: (ha-351458-m04) Calling .GetIP
	I0925 18:53:57.401455   28345 main.go:141] libmachine: (ha-351458-m04) DBG | domain ha-351458-m04 has defined MAC address 52:54:00:8c:17:8b in network mk-ha-351458
	I0925 18:53:57.401887   28345 main.go:141] libmachine: (ha-351458-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:17:8b", ip: ""} in network mk-ha-351458: {Iface:virbr1 ExpiryTime:2024-09-25 19:52:43 +0000 UTC Type:0 Mac:52:54:00:8c:17:8b Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-351458-m04 Clientid:01:52:54:00:8c:17:8b}
	I0925 18:53:57.401915   28345 main.go:141] libmachine: (ha-351458-m04) DBG | domain ha-351458-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:8c:17:8b in network mk-ha-351458
	I0925 18:53:57.402038   28345 host.go:66] Checking if "ha-351458-m04" exists ...
	I0925 18:53:57.402330   28345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:53:57.402363   28345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:53:57.416927   28345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37575
	I0925 18:53:57.417434   28345 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:53:57.417897   28345 main.go:141] libmachine: Using API Version  1
	I0925 18:53:57.417919   28345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:53:57.418225   28345 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:53:57.418391   28345 main.go:141] libmachine: (ha-351458-m04) Calling .DriverName
	I0925 18:53:57.418579   28345 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0925 18:53:57.418612   28345 main.go:141] libmachine: (ha-351458-m04) Calling .GetSSHHostname
	I0925 18:53:57.421615   28345 main.go:141] libmachine: (ha-351458-m04) DBG | domain ha-351458-m04 has defined MAC address 52:54:00:8c:17:8b in network mk-ha-351458
	I0925 18:53:57.422062   28345 main.go:141] libmachine: (ha-351458-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:17:8b", ip: ""} in network mk-ha-351458: {Iface:virbr1 ExpiryTime:2024-09-25 19:52:43 +0000 UTC Type:0 Mac:52:54:00:8c:17:8b Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-351458-m04 Clientid:01:52:54:00:8c:17:8b}
	I0925 18:53:57.422096   28345 main.go:141] libmachine: (ha-351458-m04) DBG | domain ha-351458-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:8c:17:8b in network mk-ha-351458
	I0925 18:53:57.422291   28345 main.go:141] libmachine: (ha-351458-m04) Calling .GetSSHPort
	I0925 18:53:57.422474   28345 main.go:141] libmachine: (ha-351458-m04) Calling .GetSSHKeyPath
	I0925 18:53:57.422629   28345 main.go:141] libmachine: (ha-351458-m04) Calling .GetSSHUsername
	I0925 18:53:57.422804   28345 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/ha-351458-m04/id_rsa Username:docker}
	I0925 18:53:57.501311   28345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 18:53:57.516770   28345 status.go:176] ha-351458-m04 status: &{Name:ha-351458-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.94s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

TestMultiControlPlane/serial/RestartSecondaryNode (43.15s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-351458 node start m02 -v=7 --alsologtostderr: (42.195876564s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (43.15s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (227.64s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-351458 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-351458 -v=7 --alsologtostderr
E0925 18:54:57.630552   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-351458 -v=7 --alsologtostderr: (42.323854104s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-351458 --wait=true -v=7 --alsologtostderr
E0925 18:57:13.771162   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:57:41.471932   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
E0925 18:58:22.760196   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-351458 --wait=true -v=7 --alsologtostderr: (3m5.211561259s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-351458
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (227.64s)

TestMultiControlPlane/serial/DeleteSecondaryNode (7.40s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-351458 node delete m03 -v=7 --alsologtostderr: (6.635044357s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.40s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

TestMultiControlPlane/serial/StopCluster (39.03s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-351458 stop -v=7 --alsologtostderr: (38.930472198s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-351458 status -v=7 --alsologtostderr: exit status 7 (100.781064ms)

                                                
                                                
-- stdout --
	ha-351458
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-351458-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-351458-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0925 18:59:16.884955   30779 out.go:345] Setting OutFile to fd 1 ...
	I0925 18:59:16.885067   30779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:59:16.885075   30779 out.go:358] Setting ErrFile to fd 2...
	I0925 18:59:16.885079   30779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:59:16.885254   30779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19681-6065/.minikube/bin
	I0925 18:59:16.885417   30779 out.go:352] Setting JSON to false
	I0925 18:59:16.885441   30779 mustload.go:65] Loading cluster: ha-351458
	I0925 18:59:16.885499   30779 notify.go:220] Checking for updates...
	I0925 18:59:16.885924   30779 config.go:182] Loaded profile config "ha-351458": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 18:59:16.885948   30779 status.go:174] checking status of ha-351458 ...
	I0925 18:59:16.886388   30779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:59:16.886461   30779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:59:16.904273   30779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46671
	I0925 18:59:16.904715   30779 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:59:16.905380   30779 main.go:141] libmachine: Using API Version  1
	I0925 18:59:16.905405   30779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:59:16.905749   30779 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:59:16.905942   30779 main.go:141] libmachine: (ha-351458) Calling .GetState
	I0925 18:59:16.907571   30779 status.go:364] ha-351458 host status = "Stopped" (err=<nil>)
	I0925 18:59:16.907587   30779 status.go:377] host is not running, skipping remaining checks
	I0925 18:59:16.907594   30779 status.go:176] ha-351458 status: &{Name:ha-351458 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0925 18:59:16.907618   30779 status.go:174] checking status of ha-351458-m02 ...
	I0925 18:59:16.907900   30779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:59:16.907942   30779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:59:16.922704   30779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36805
	I0925 18:59:16.923156   30779 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:59:16.923574   30779 main.go:141] libmachine: Using API Version  1
	I0925 18:59:16.923593   30779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:59:16.923917   30779 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:59:16.924078   30779 main.go:141] libmachine: (ha-351458-m02) Calling .GetState
	I0925 18:59:16.925804   30779 status.go:364] ha-351458-m02 host status = "Stopped" (err=<nil>)
	I0925 18:59:16.925822   30779 status.go:377] host is not running, skipping remaining checks
	I0925 18:59:16.925828   30779 status.go:176] ha-351458-m02 status: &{Name:ha-351458-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0925 18:59:16.925864   30779 status.go:174] checking status of ha-351458-m04 ...
	I0925 18:59:16.926148   30779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 18:59:16.926181   30779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 18:59:16.940907   30779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45329
	I0925 18:59:16.941347   30779 main.go:141] libmachine: () Calling .GetVersion
	I0925 18:59:16.941843   30779 main.go:141] libmachine: Using API Version  1
	I0925 18:59:16.941863   30779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 18:59:16.942243   30779 main.go:141] libmachine: () Calling .GetMachineName
	I0925 18:59:16.942440   30779 main.go:141] libmachine: (ha-351458-m04) Calling .GetState
	I0925 18:59:16.943912   30779 status.go:364] ha-351458-m04 host status = "Stopped" (err=<nil>)
	I0925 18:59:16.943932   30779 status.go:377] host is not running, skipping remaining checks
	I0925 18:59:16.943939   30779 status.go:176] ha-351458-m04 status: &{Name:ha-351458-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (39.03s)

TestMultiControlPlane/serial/RestartCluster (158.53s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-351458 --wait=true -v=7 --alsologtostderr --driver=kvm2 
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-351458 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m37.763374192s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (158.53s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

TestMultiControlPlane/serial/AddSecondaryNode (84.63s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-351458 --control-plane -v=7 --alsologtostderr
E0925 19:02:13.770928   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-351458 --control-plane -v=7 --alsologtostderr: (1m23.763126502s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-351458 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.63s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

TestImageBuild/serial/Setup (50.91s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-185401 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-185401 --driver=kvm2 : (50.912428281s)
--- PASS: TestImageBuild/serial/Setup (50.91s)

TestImageBuild/serial/NormalBuild (1.49s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-185401
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-185401: (1.491612246s)
--- PASS: TestImageBuild/serial/NormalBuild (1.49s)

TestImageBuild/serial/BuildWithBuildArg (0.99s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-185401
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.99s)

TestImageBuild/serial/BuildWithDockerIgnore (0.81s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-185401
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.81s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.86s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-185401
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.86s)

TestJSONOutput/start/Command (93.12s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-903836 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0925 19:04:45.825699   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-903836 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m33.122933429s)
--- PASS: TestJSONOutput/start/Command (93.12s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.59s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-903836 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.59s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.52s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-903836 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.52s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (13.31s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-903836 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-903836 --output=json --user=testUser: (13.313287001s)
--- PASS: TestJSONOutput/stop/Command (13.31s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-119299 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-119299 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.061877ms)

-- stdout --
	{"specversion":"1.0","id":"3f3a5249-d689-4502-b6d2-4bef115a0834","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-119299] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"474e637e-d864-4a74-a4e0-6ddecd5edc62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19681"}}
	{"specversion":"1.0","id":"32af1692-4042-4e09-a410-93cb3f12ab64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e09d2108-79a3-4f7a-aced-81342887e88d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19681-6065/kubeconfig"}}
	{"specversion":"1.0","id":"df76a696-a347-4465-9954-9ca9230d51f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-6065/.minikube"}}
	{"specversion":"1.0","id":"94b4f9e1-3d60-4d28-86c7-242f5751bb81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"78fbc667-35dd-4427-b750-8584ad68e8ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c9c890ed-f2b5-464c-a208-6f096b26d3af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-119299" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-119299
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (102.46s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-111385 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-111385 --driver=kvm2 : (50.201151791s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-122372 --driver=kvm2 
E0925 19:07:13.771569   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-122372 --driver=kvm2 : (49.152933425s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-111385
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-122372
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-122372" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-122372
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-122372: (1.018198761s)
helpers_test.go:175: Cleaning up "first-111385" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-111385
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-111385: (1.012034579s)
--- PASS: TestMinikubeProfile (102.46s)

TestMountStart/serial/StartWithMountFirst (28.92s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-149587 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-149587 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.922372881s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.92s)

TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-149587 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-149587 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

TestMountStart/serial/StartWithMountSecond (30.87s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-165643 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0925 19:08:22.761052   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:08:36.836088   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-165643 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (29.868909651s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.87s)

TestMountStart/serial/VerifyMountSecond (0.49s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-165643 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-165643 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.49s)

TestMountStart/serial/DeleteFirst (0.74s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-149587 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.74s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-165643 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-165643 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (2.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-165643
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-165643: (2.281926507s)
--- PASS: TestMountStart/serial/Stop (2.28s)

TestMountStart/serial/RestartStopped (25.69s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-165643
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-165643: (24.694154274s)
--- PASS: TestMountStart/serial/RestartStopped (25.69s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-165643 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-165643 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (132.89s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-483638 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-483638 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m12.482686099s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (132.89s)

TestMultiNode/serial/DeployApp2Nodes (3.49s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-483638 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-483638 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-483638 -- rollout status deployment/busybox: (1.936831622s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-483638 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-483638 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-483638 -- exec busybox-7dff88458-8kwtk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-483638 -- exec busybox-7dff88458-v767q -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-483638 -- exec busybox-7dff88458-8kwtk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-483638 -- exec busybox-7dff88458-v767q -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-483638 -- exec busybox-7dff88458-8kwtk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-483638 -- exec busybox-7dff88458-v767q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.49s)

TestMultiNode/serial/PingHostFrom2Pods (0.85s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-483638 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-483638 -- exec busybox-7dff88458-8kwtk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-483638 -- exec busybox-7dff88458-8kwtk -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-483638 -- exec busybox-7dff88458-v767q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-483638 -- exec busybox-7dff88458-v767q -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)

TestMultiNode/serial/AddNode (55.4s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-483638 -v 3 --alsologtostderr
E0925 19:12:13.771269   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-483638 -v 3 --alsologtostderr: (54.833038458s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (55.40s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-483638 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.58s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

TestMultiNode/serial/CopyFile (7.14s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 cp testdata/cp-test.txt multinode-483638:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 ssh -n multinode-483638 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 cp multinode-483638:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1377183381/001/cp-test_multinode-483638.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 ssh -n multinode-483638 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 cp multinode-483638:/home/docker/cp-test.txt multinode-483638-m02:/home/docker/cp-test_multinode-483638_multinode-483638-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 ssh -n multinode-483638 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 ssh -n multinode-483638-m02 "sudo cat /home/docker/cp-test_multinode-483638_multinode-483638-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 cp multinode-483638:/home/docker/cp-test.txt multinode-483638-m03:/home/docker/cp-test_multinode-483638_multinode-483638-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 ssh -n multinode-483638 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 ssh -n multinode-483638-m03 "sudo cat /home/docker/cp-test_multinode-483638_multinode-483638-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 cp testdata/cp-test.txt multinode-483638-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 ssh -n multinode-483638-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 cp multinode-483638-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1377183381/001/cp-test_multinode-483638-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 ssh -n multinode-483638-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 cp multinode-483638-m02:/home/docker/cp-test.txt multinode-483638:/home/docker/cp-test_multinode-483638-m02_multinode-483638.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 ssh -n multinode-483638-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 ssh -n multinode-483638 "sudo cat /home/docker/cp-test_multinode-483638-m02_multinode-483638.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 cp multinode-483638-m02:/home/docker/cp-test.txt multinode-483638-m03:/home/docker/cp-test_multinode-483638-m02_multinode-483638-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 ssh -n multinode-483638-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 ssh -n multinode-483638-m03 "sudo cat /home/docker/cp-test_multinode-483638-m02_multinode-483638-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 cp testdata/cp-test.txt multinode-483638-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 ssh -n multinode-483638-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 cp multinode-483638-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1377183381/001/cp-test_multinode-483638-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 ssh -n multinode-483638-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 cp multinode-483638-m03:/home/docker/cp-test.txt multinode-483638:/home/docker/cp-test_multinode-483638-m03_multinode-483638.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 ssh -n multinode-483638-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 ssh -n multinode-483638 "sudo cat /home/docker/cp-test_multinode-483638-m03_multinode-483638.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 cp multinode-483638-m03:/home/docker/cp-test.txt multinode-483638-m02:/home/docker/cp-test_multinode-483638-m03_multinode-483638-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 ssh -n multinode-483638-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 ssh -n multinode-483638-m02 "sudo cat /home/docker/cp-test_multinode-483638-m03_multinode-483638-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.14s)
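The CopyFile steps above exercise a copy-then-verify pattern: write a file on one node, copy it to another location, and confirm the bytes match by reading both back. A minimal local sketch of that pattern (plain Python against temp files, standing in for the node-to-node `cp`/`ssh cat` round trips; all names and content here are illustrative):

```python
import os
import shutil
import tempfile

def copy_and_verify(src: str, dst: str) -> bool:
    """Copy src to dst, then read both back and compare contents byte-for-byte."""
    shutil.copy(src, dst)
    with open(src, "rb") as a, open(dst, "rb") as b:
        return a.read() == b.read()

# A throwaway directory stands in for the cluster nodes.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "cp-test.txt")
with open(src, "w") as f:
    f.write("hello from cp-test\n")  # placeholder content, not the real testdata
dst = os.path.join(tmp, "cp-test_copy.txt")
print(copy_and_verify(src, dst))  # → True
```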

                                                
                                    
TestMultiNode/serial/StopNode (3.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-483638 node stop m03: (2.58556459s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-483638 status: exit status 7 (425.203723ms)

                                                
                                                
-- stdout --
	multinode-483638
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-483638-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-483638-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-483638 status --alsologtostderr: exit status 7 (420.912297ms)

                                                
                                                
-- stdout --
	multinode-483638
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-483638-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-483638-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0925 19:12:46.456214   39391 out.go:345] Setting OutFile to fd 1 ...
	I0925 19:12:46.456382   39391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 19:12:46.456394   39391 out.go:358] Setting ErrFile to fd 2...
	I0925 19:12:46.456401   39391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 19:12:46.456705   39391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19681-6065/.minikube/bin
	I0925 19:12:46.456940   39391 out.go:352] Setting JSON to false
	I0925 19:12:46.456978   39391 mustload.go:65] Loading cluster: multinode-483638
	I0925 19:12:46.457085   39391 notify.go:220] Checking for updates...
	I0925 19:12:46.457547   39391 config.go:182] Loaded profile config "multinode-483638": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 19:12:46.457587   39391 status.go:174] checking status of multinode-483638 ...
	I0925 19:12:46.458240   39391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 19:12:46.458290   39391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 19:12:46.476426   39391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38311
	I0925 19:12:46.476910   39391 main.go:141] libmachine: () Calling .GetVersion
	I0925 19:12:46.477480   39391 main.go:141] libmachine: Using API Version  1
	I0925 19:12:46.477523   39391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 19:12:46.477858   39391 main.go:141] libmachine: () Calling .GetMachineName
	I0925 19:12:46.478005   39391 main.go:141] libmachine: (multinode-483638) Calling .GetState
	I0925 19:12:46.479726   39391 status.go:364] multinode-483638 host status = "Running" (err=<nil>)
	I0925 19:12:46.479744   39391 host.go:66] Checking if "multinode-483638" exists ...
	I0925 19:12:46.480162   39391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 19:12:46.480212   39391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 19:12:46.496039   39391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34609
	I0925 19:12:46.496456   39391 main.go:141] libmachine: () Calling .GetVersion
	I0925 19:12:46.496907   39391 main.go:141] libmachine: Using API Version  1
	I0925 19:12:46.496931   39391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 19:12:46.497230   39391 main.go:141] libmachine: () Calling .GetMachineName
	I0925 19:12:46.497375   39391 main.go:141] libmachine: (multinode-483638) Calling .GetIP
	I0925 19:12:46.499938   39391 main.go:141] libmachine: (multinode-483638) DBG | domain multinode-483638 has defined MAC address 52:54:00:f4:43:5a in network mk-multinode-483638
	I0925 19:12:46.500327   39391 main.go:141] libmachine: (multinode-483638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:43:5a", ip: ""} in network mk-multinode-483638: {Iface:virbr1 ExpiryTime:2024-09-25 20:09:37 +0000 UTC Type:0 Mac:52:54:00:f4:43:5a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:multinode-483638 Clientid:01:52:54:00:f4:43:5a}
	I0925 19:12:46.500353   39391 main.go:141] libmachine: (multinode-483638) DBG | domain multinode-483638 has defined IP address 192.168.39.227 and MAC address 52:54:00:f4:43:5a in network mk-multinode-483638
	I0925 19:12:46.500461   39391 host.go:66] Checking if "multinode-483638" exists ...
	I0925 19:12:46.500771   39391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 19:12:46.500810   39391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 19:12:46.516366   39391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36113
	I0925 19:12:46.516834   39391 main.go:141] libmachine: () Calling .GetVersion
	I0925 19:12:46.517365   39391 main.go:141] libmachine: Using API Version  1
	I0925 19:12:46.517383   39391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 19:12:46.517692   39391 main.go:141] libmachine: () Calling .GetMachineName
	I0925 19:12:46.517856   39391 main.go:141] libmachine: (multinode-483638) Calling .DriverName
	I0925 19:12:46.518063   39391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0925 19:12:46.518097   39391 main.go:141] libmachine: (multinode-483638) Calling .GetSSHHostname
	I0925 19:12:46.520842   39391 main.go:141] libmachine: (multinode-483638) DBG | domain multinode-483638 has defined MAC address 52:54:00:f4:43:5a in network mk-multinode-483638
	I0925 19:12:46.521246   39391 main.go:141] libmachine: (multinode-483638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:43:5a", ip: ""} in network mk-multinode-483638: {Iface:virbr1 ExpiryTime:2024-09-25 20:09:37 +0000 UTC Type:0 Mac:52:54:00:f4:43:5a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:multinode-483638 Clientid:01:52:54:00:f4:43:5a}
	I0925 19:12:46.521265   39391 main.go:141] libmachine: (multinode-483638) DBG | domain multinode-483638 has defined IP address 192.168.39.227 and MAC address 52:54:00:f4:43:5a in network mk-multinode-483638
	I0925 19:12:46.521393   39391 main.go:141] libmachine: (multinode-483638) Calling .GetSSHPort
	I0925 19:12:46.521554   39391 main.go:141] libmachine: (multinode-483638) Calling .GetSSHKeyPath
	I0925 19:12:46.521727   39391 main.go:141] libmachine: (multinode-483638) Calling .GetSSHUsername
	I0925 19:12:46.521827   39391 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/multinode-483638/id_rsa Username:docker}
	I0925 19:12:46.605073   39391 ssh_runner.go:195] Run: systemctl --version
	I0925 19:12:46.611734   39391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 19:12:46.627718   39391 kubeconfig.go:125] found "multinode-483638" server: "https://192.168.39.227:8443"
	I0925 19:12:46.627753   39391 api_server.go:166] Checking apiserver status ...
	I0925 19:12:46.627785   39391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 19:12:46.641667   39391 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1931/cgroup
	W0925 19:12:46.652012   39391 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1931/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0925 19:12:46.652066   39391 ssh_runner.go:195] Run: ls
	I0925 19:12:46.657055   39391 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0925 19:12:46.662236   39391 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0925 19:12:46.662266   39391 status.go:456] multinode-483638 apiserver status = Running (err=<nil>)
	I0925 19:12:46.662292   39391 status.go:176] multinode-483638 status: &{Name:multinode-483638 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0925 19:12:46.662330   39391 status.go:174] checking status of multinode-483638-m02 ...
	I0925 19:12:46.662630   39391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 19:12:46.662674   39391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 19:12:46.677486   39391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38613
	I0925 19:12:46.677955   39391 main.go:141] libmachine: () Calling .GetVersion
	I0925 19:12:46.678487   39391 main.go:141] libmachine: Using API Version  1
	I0925 19:12:46.678518   39391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 19:12:46.678807   39391 main.go:141] libmachine: () Calling .GetMachineName
	I0925 19:12:46.678976   39391 main.go:141] libmachine: (multinode-483638-m02) Calling .GetState
	I0925 19:12:46.680269   39391 status.go:364] multinode-483638-m02 host status = "Running" (err=<nil>)
	I0925 19:12:46.680284   39391 host.go:66] Checking if "multinode-483638-m02" exists ...
	I0925 19:12:46.680559   39391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 19:12:46.680588   39391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 19:12:46.695214   39391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I0925 19:12:46.695661   39391 main.go:141] libmachine: () Calling .GetVersion
	I0925 19:12:46.696045   39391 main.go:141] libmachine: Using API Version  1
	I0925 19:12:46.696067   39391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 19:12:46.696372   39391 main.go:141] libmachine: () Calling .GetMachineName
	I0925 19:12:46.696565   39391 main.go:141] libmachine: (multinode-483638-m02) Calling .GetIP
	I0925 19:12:46.699222   39391 main.go:141] libmachine: (multinode-483638-m02) DBG | domain multinode-483638-m02 has defined MAC address 52:54:00:c8:be:39 in network mk-multinode-483638
	I0925 19:12:46.699603   39391 main.go:141] libmachine: (multinode-483638-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:be:39", ip: ""} in network mk-multinode-483638: {Iface:virbr1 ExpiryTime:2024-09-25 20:10:51 +0000 UTC Type:0 Mac:52:54:00:c8:be:39 Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-483638-m02 Clientid:01:52:54:00:c8:be:39}
	I0925 19:12:46.699626   39391 main.go:141] libmachine: (multinode-483638-m02) DBG | domain multinode-483638-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:c8:be:39 in network mk-multinode-483638
	I0925 19:12:46.699752   39391 host.go:66] Checking if "multinode-483638-m02" exists ...
	I0925 19:12:46.700101   39391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 19:12:46.700143   39391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 19:12:46.714990   39391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36513
	I0925 19:12:46.715374   39391 main.go:141] libmachine: () Calling .GetVersion
	I0925 19:12:46.715821   39391 main.go:141] libmachine: Using API Version  1
	I0925 19:12:46.715842   39391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 19:12:46.716108   39391 main.go:141] libmachine: () Calling .GetMachineName
	I0925 19:12:46.716318   39391 main.go:141] libmachine: (multinode-483638-m02) Calling .DriverName
	I0925 19:12:46.716518   39391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0925 19:12:46.716541   39391 main.go:141] libmachine: (multinode-483638-m02) Calling .GetSSHHostname
	I0925 19:12:46.719160   39391 main.go:141] libmachine: (multinode-483638-m02) DBG | domain multinode-483638-m02 has defined MAC address 52:54:00:c8:be:39 in network mk-multinode-483638
	I0925 19:12:46.719548   39391 main.go:141] libmachine: (multinode-483638-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:be:39", ip: ""} in network mk-multinode-483638: {Iface:virbr1 ExpiryTime:2024-09-25 20:10:51 +0000 UTC Type:0 Mac:52:54:00:c8:be:39 Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-483638-m02 Clientid:01:52:54:00:c8:be:39}
	I0925 19:12:46.719581   39391 main.go:141] libmachine: (multinode-483638-m02) DBG | domain multinode-483638-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:c8:be:39 in network mk-multinode-483638
	I0925 19:12:46.719741   39391 main.go:141] libmachine: (multinode-483638-m02) Calling .GetSSHPort
	I0925 19:12:46.719896   39391 main.go:141] libmachine: (multinode-483638-m02) Calling .GetSSHKeyPath
	I0925 19:12:46.720024   39391 main.go:141] libmachine: (multinode-483638-m02) Calling .GetSSHUsername
	I0925 19:12:46.720140   39391 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19681-6065/.minikube/machines/multinode-483638-m02/id_rsa Username:docker}
	I0925 19:12:46.801109   39391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 19:12:46.816865   39391 status.go:176] multinode-483638-m02 status: &{Name:multinode-483638-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0925 19:12:46.816896   39391 status.go:174] checking status of multinode-483638-m03 ...
	I0925 19:12:46.817255   39391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 19:12:46.817303   39391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 19:12:46.832962   39391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41141
	I0925 19:12:46.833335   39391 main.go:141] libmachine: () Calling .GetVersion
	I0925 19:12:46.833818   39391 main.go:141] libmachine: Using API Version  1
	I0925 19:12:46.833840   39391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 19:12:46.834155   39391 main.go:141] libmachine: () Calling .GetMachineName
	I0925 19:12:46.834333   39391 main.go:141] libmachine: (multinode-483638-m03) Calling .GetState
	I0925 19:12:46.835922   39391 status.go:364] multinode-483638-m03 host status = "Stopped" (err=<nil>)
	I0925 19:12:46.835936   39391 status.go:377] host is not running, skipping remaining checks
	I0925 19:12:46.835941   39391 status.go:176] multinode-483638-m03 status: &{Name:multinode-483638-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.43s)
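The `minikube status` text captured above has a fixed shape: a node-name line followed by indented `key: value` pairs, one block per node. A hedged sketch of parsing that output into per-node dicts (the function is illustrative, not part of minikube):

```python
def parse_status(text: str) -> dict:
    """Parse minikube's plain-text status output into {node: {field: value}}."""
    nodes, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if ":" in line and current is not None:
            key, _, value = line.partition(":")
            nodes[current][key.strip()] = value.strip()
        else:
            # A line with no colon starts a new node block.
            current = line
            nodes[current] = {}
    return nodes

sample = """multinode-483638
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-483638-m03
type: Worker
host: Stopped
kubelet: Stopped
"""
status = parse_status(sample)
print(status["multinode-483638-m03"]["host"])  # → Stopped
```

A check like this is what decides the exit code: any node with `host: Stopped` turns the run above into the observed `exit status 7`.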

                                                
                                    
TestMultiNode/serial/StartAfterStop (42.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 node start m03 -v=7 --alsologtostderr
E0925 19:13:22.760887   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-483638 node start m03 -v=7 --alsologtostderr: (41.758791704s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (42.37s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (193.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-483638
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-483638
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-483638: (27.548788551s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-483638 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-483638 --wait=true -v=8 --alsologtostderr: (2m45.492454349s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-483638
--- PASS: TestMultiNode/serial/RestartKeepsNodes (193.13s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-483638 node delete m03: (1.768611196s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.30s)
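The `kubectl get nodes -o go-template` invocation above walks each node's `.status.conditions` and prints the status of the `Ready` condition. The same selection, sketched in Python over the node-list structure kubectl returns (the sample data is illustrative):

```python
def ready_statuses(node_list: dict) -> list:
    """Mimic the go-template: for each node, emit the Ready condition's status."""
    out = []
    for node in node_list["items"]:
        for cond in node["status"]["conditions"]:
            if cond["type"] == "Ready":
                out.append(cond["status"])
    return out

sample = {"items": [
    {"status": {"conditions": [{"type": "MemoryPressure", "status": "False"},
                               {"type": "Ready", "status": "True"}]}},
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
]}
print(ready_statuses(sample))  # → ['True', 'True']
```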

                                                
                                    
TestMultiNode/serial/StopMultiNode (25.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-483638 stop: (24.972931195s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-483638 status: exit status 7 (82.874427ms)

                                                
                                                
-- stdout --
	multinode-483638
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-483638-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-483638 status --alsologtostderr: exit status 7 (83.249727ms)

                                                
                                                
-- stdout --
	multinode-483638
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-483638-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0925 19:17:09.738327   41230 out.go:345] Setting OutFile to fd 1 ...
	I0925 19:17:09.738455   41230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 19:17:09.738465   41230 out.go:358] Setting ErrFile to fd 2...
	I0925 19:17:09.738469   41230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 19:17:09.738662   41230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19681-6065/.minikube/bin
	I0925 19:17:09.738824   41230 out.go:352] Setting JSON to false
	I0925 19:17:09.738848   41230 mustload.go:65] Loading cluster: multinode-483638
	I0925 19:17:09.738894   41230 notify.go:220] Checking for updates...
	I0925 19:17:09.739423   41230 config.go:182] Loaded profile config "multinode-483638": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 19:17:09.739450   41230 status.go:174] checking status of multinode-483638 ...
	I0925 19:17:09.739959   41230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 19:17:09.740001   41230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 19:17:09.758047   41230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34085
	I0925 19:17:09.758447   41230 main.go:141] libmachine: () Calling .GetVersion
	I0925 19:17:09.759091   41230 main.go:141] libmachine: Using API Version  1
	I0925 19:17:09.759117   41230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 19:17:09.759482   41230 main.go:141] libmachine: () Calling .GetMachineName
	I0925 19:17:09.759690   41230 main.go:141] libmachine: (multinode-483638) Calling .GetState
	I0925 19:17:09.761216   41230 status.go:364] multinode-483638 host status = "Stopped" (err=<nil>)
	I0925 19:17:09.761233   41230 status.go:377] host is not running, skipping remaining checks
	I0925 19:17:09.761240   41230 status.go:176] multinode-483638 status: &{Name:multinode-483638 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0925 19:17:09.761277   41230 status.go:174] checking status of multinode-483638-m02 ...
	I0925 19:17:09.761589   41230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0925 19:17:09.761642   41230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0925 19:17:09.776183   41230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38175
	I0925 19:17:09.776535   41230 main.go:141] libmachine: () Calling .GetVersion
	I0925 19:17:09.776991   41230 main.go:141] libmachine: Using API Version  1
	I0925 19:17:09.777018   41230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 19:17:09.777317   41230 main.go:141] libmachine: () Calling .GetMachineName
	I0925 19:17:09.777480   41230 main.go:141] libmachine: (multinode-483638-m02) Calling .GetState
	I0925 19:17:09.779102   41230 status.go:364] multinode-483638-m02 host status = "Stopped" (err=<nil>)
	I0925 19:17:09.779117   41230 status.go:377] host is not running, skipping remaining checks
	I0925 19:17:09.779124   41230 status.go:176] multinode-483638-m02 status: &{Name:multinode-483638-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.14s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (116.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-483638 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0925 19:17:13.771654   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:18:22.760359   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-483638 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m56.216885068s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-483638 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (116.73s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (52.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-483638
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-483638-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-483638-m02 --driver=kvm2 : exit status 14 (61.493038ms)

                                                
                                                
-- stdout --
	* [multinode-483638-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19681-6065/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-6065/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-483638-m02' is duplicated with machine name 'multinode-483638-m02' in profile 'multinode-483638'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-483638-m03 --driver=kvm2 
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-483638-m03 --driver=kvm2 : (50.923371833s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-483638
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-483638: exit status 80 (203.978178ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-483638 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-483638-m03 already exists in multinode-483638-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-483638-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (52.01s)
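ValidateNameConflict confirms that minikube refuses a new profile whose name collides with a machine inside an existing multi-node profile (the `MK_USAGE` exit above). A hedged sketch of that uniqueness check; the function and data layout are illustrative, not minikube's internals:

```python
from typing import Optional

def check_profile_name(new_name: str, profiles: dict) -> Optional[str]:
    """Return an error string if new_name collides with a profile or one of its machines."""
    for profile, machines in profiles.items():
        if new_name == profile or new_name in machines:
            return (f"Profile name '{new_name}' is duplicated with machine "
                    f"name '{new_name}' in profile '{profile}'")
    return None  # name is unique

existing = {"multinode-483638": ["multinode-483638", "multinode-483638-m02"]}
print(check_profile_name("multinode-483638-m02", existing))  # collision, as in the log
print(check_profile_name("multinode-483638-m03", existing))  # → None (free to use)
```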

                                                
                                    
TestPreload (195.13s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-926452 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0925 19:21:25.828554   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-926452 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (2m1.215861103s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-926452 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-926452
E0925 19:22:13.770918   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-926452: (13.291767064s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-926452 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-926452 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (58.526095592s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-926452 image list
helpers_test.go:175: Cleaning up "test-preload-926452" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-926452
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-926452: (1.068926716s)
--- PASS: TestPreload (195.13s)

TestScheduledStopUnix (125.77s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-811182 --memory=2048 --driver=kvm2 
E0925 19:23:22.761752   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-811182 --memory=2048 --driver=kvm2 : (54.167467334s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-811182 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-811182 -n scheduled-stop-811182
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-811182 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0925 19:24:09.955156   13239 retry.go:31] will retry after 59.411µs: open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/scheduled-stop-811182/pid: no such file or directory
I0925 19:24:09.956383   13239 retry.go:31] will retry after 220.706µs: open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/scheduled-stop-811182/pid: no such file or directory
I0925 19:24:09.957546   13239 retry.go:31] will retry after 123.346µs: open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/scheduled-stop-811182/pid: no such file or directory
I0925 19:24:09.958706   13239 retry.go:31] will retry after 170.944µs: open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/scheduled-stop-811182/pid: no such file or directory
I0925 19:24:09.959857   13239 retry.go:31] will retry after 602.337µs: open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/scheduled-stop-811182/pid: no such file or directory
I0925 19:24:09.961023   13239 retry.go:31] will retry after 847.166µs: open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/scheduled-stop-811182/pid: no such file or directory
I0925 19:24:09.962145   13239 retry.go:31] will retry after 1.046425ms: open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/scheduled-stop-811182/pid: no such file or directory
I0925 19:24:09.963308   13239 retry.go:31] will retry after 1.853714ms: open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/scheduled-stop-811182/pid: no such file or directory
I0925 19:24:09.965530   13239 retry.go:31] will retry after 2.563822ms: open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/scheduled-stop-811182/pid: no such file or directory
I0925 19:24:09.968775   13239 retry.go:31] will retry after 4.309334ms: open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/scheduled-stop-811182/pid: no such file or directory
I0925 19:24:09.974017   13239 retry.go:31] will retry after 3.715561ms: open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/scheduled-stop-811182/pid: no such file or directory
I0925 19:24:09.978269   13239 retry.go:31] will retry after 5.508149ms: open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/scheduled-stop-811182/pid: no such file or directory
I0925 19:24:09.984505   13239 retry.go:31] will retry after 12.997321ms: open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/scheduled-stop-811182/pid: no such file or directory
I0925 19:24:09.997780   13239 retry.go:31] will retry after 17.752962ms: open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/scheduled-stop-811182/pid: no such file or directory
I0925 19:24:10.016050   13239 retry.go:31] will retry after 23.557804ms: open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/scheduled-stop-811182/pid: no such file or directory
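The fifteen retry attempts above use escalating, jittered delays. As an aside, that capped-growth-with-jitter pattern can be modeled as follows (an illustrative sketch only, not minikube's actual `retry.go` implementation; all parameter values here are made up):

```python
import random

def backoff_delays(base_us=60.0, factor=1.8, jitter=0.5, attempts=15):
    """Illustrative growing retry delays (in microseconds) with random
    jitter, qualitatively matching the intervals logged above."""
    delay, out = base_us, []
    for _ in range(attempts):
        # Each attempt waits roughly `delay`, perturbed by up to +/-50%.
        out.append(delay * (1 + random.uniform(-jitter, jitter)))
        delay *= factor
    return out

delays = backoff_delays()
```

Even with jitter, the later delays dominate the earlier ones, which matches the progression from microseconds to tens of milliseconds in the log.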
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-811182 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-811182 -n scheduled-stop-811182
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-811182
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-811182 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0925 19:25:16.839543   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-811182
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-811182: exit status 7 (63.385282ms)
-- stdout --
	scheduled-stop-811182
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-811182 -n scheduled-stop-811182
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-811182 -n scheduled-stop-811182: exit status 7 (64.978685ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-811182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-811182
--- PASS: TestScheduledStopUnix (125.77s)

TestSkaffold (132.71s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3444838749 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-816010 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-816010 --memory=2600 --driver=kvm2 : (50.979448045s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3444838749 run --minikube-profile skaffold-816010 --kube-context skaffold-816010 --status-check=true --port-forward=false --interactive=false
E0925 19:27:13.771242   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3444838749 run --minikube-profile skaffold-816010 --kube-context skaffold-816010 --status-check=true --port-forward=false --interactive=false: (1m8.67427812s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5cdcbf7454-jsl7v" [9203fd8d-c4be-468d-a5de-c26297ecd546] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003977111s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-85f94f64dd-th5bm" [e8dac127-65bb-447f-99a9-ebe1c19cf522] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00462118s
helpers_test.go:175: Cleaning up "skaffold-816010" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-816010
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-816010: (1.194435918s)
--- PASS: TestSkaffold (132.71s)

TestRunningBinaryUpgrade (215.4s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2324320647 start -p running-upgrade-735410 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2324320647 start -p running-upgrade-735410 --memory=2200 --vm-driver=kvm2 : (2m25.769855509s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-735410 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-735410 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m8.061663559s)
helpers_test.go:175: Cleaning up "running-upgrade-735410" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-735410
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-735410: (1.036072275s)
--- PASS: TestRunningBinaryUpgrade (215.40s)

TestKubernetesUpgrade (234.04s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-448175 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
E0925 19:28:22.760707   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-448175 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (1m56.167295829s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-448175
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-448175: (12.858896394s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-448175 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-448175 status --format={{.Host}}: exit status 7 (64.488879ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-448175 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-448175 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 : (53.472552875s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-448175 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-448175 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-448175 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (111.146501ms)
-- stdout --
	* [kubernetes-upgrade-448175] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19681-6065/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-6065/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-448175
	    minikube start -p kubernetes-upgrade-448175 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4481752 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-448175 --kubernetes-version=v1.31.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-448175 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-448175 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 : (50.173635276s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-448175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-448175
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-448175: (1.104011767s)
--- PASS: TestKubernetesUpgrade (234.04s)

TestPause/serial/Start (122.79s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-846486 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-846486 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (2m2.791480967s)
--- PASS: TestPause/serial/Start (122.79s)

TestPause/serial/SecondStartNoReconfiguration (84.18s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-846486 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-846486 --alsologtostderr -v=1 --driver=kvm2 : (1m24.15320796s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (84.18s)

TestPause/serial/Pause (0.62s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-846486 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.62s)

TestPause/serial/VerifyStatus (0.27s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-846486 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-846486 --output=json --layout=cluster: exit status 2 (270.958336ms)
-- stdout --
	{"Name":"pause-846486","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-846486","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)
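The `--layout=cluster` JSON captured above is machine-readable: a paused profile reports status code 418 ("Paused") while its kubelet reports 405 ("Stopped"). A minimal sketch of parsing it (the JSON literal is a trimmed copy of the output above, with the Step/StepDetail/BinaryVersion fields omitted):

```python
import json

# Trimmed copy of the `minikube status -p pause-846486 --output=json
# --layout=cluster` output captured in the run above.
raw = (
    '{"Name":"pause-846486","StatusCode":418,"StatusName":"Paused",'
    '"Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},'
    '"Nodes":[{"Name":"pause-846486","StatusCode":200,"StatusName":"OK",'
    '"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},'
    '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'
)
status = json.loads(raw)

# Cluster-level state and the kubelet component state, as asserted by the test.
cluster_state = status["StatusName"]
kubelet_state = status["Nodes"][0]["Components"]["kubelet"]["StatusName"]
```

Note that the command itself exits with status 2 for a paused cluster (as logged above), so callers must tolerate a non-zero exit before parsing stdout.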

TestPause/serial/Unpause (0.62s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-846486 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.62s)

TestPause/serial/PauseAgain (0.82s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-846486 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.82s)

TestPause/serial/DeletePaused (1.14s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-846486 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-846486 --alsologtostderr -v=5: (1.135824592s)
--- PASS: TestPause/serial/DeletePaused (1.14s)

TestPause/serial/VerifyDeletedResources (5.42s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (5.417111925s)
--- PASS: TestPause/serial/VerifyDeletedResources (5.42s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-668585 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-668585 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (76.923159ms)
-- stdout --
	* [NoKubernetes-668585] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19681-6065/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-6065/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (89.15s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-668585 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-668585 --driver=kvm2 : (1m28.83077157s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-668585 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (89.15s)

TestStoppedBinaryUpgrade/Setup (0.44s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.44s)

TestStoppedBinaryUpgrade/Upgrade (158.74s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.541163133 start -p stopped-upgrade-197888 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.541163133 start -p stopped-upgrade-197888 --memory=2200 --vm-driver=kvm2 : (1m39.039202625s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.541163133 -p stopped-upgrade-197888 stop
E0925 19:33:22.760397   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.541163133 -p stopped-upgrade-197888 stop: (13.181045201s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-197888 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-197888 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (46.519553992s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (158.74s)

TestNoKubernetes/serial/StartWithStopK8s (53.21s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-668585 --no-kubernetes --driver=kvm2 
E0925 19:33:02.671364   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/skaffold-816010/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-668585 --no-kubernetes --driver=kvm2 : (51.884383783s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-668585 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-668585 status -o json: exit status 2 (232.84267ms)
-- stdout --
	{"Name":"NoKubernetes-668585","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-668585
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-668585: (1.094421531s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (53.21s)
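The `status -o json` output above is the exit-status-2 case: the VM host is running but no Kubernetes components are. A minimal sketch of checking that programmatically (the JSON literal is copied verbatim from the output above):

```python
import json

# Output of `minikube -p NoKubernetes-668585 status -o json`, copied from the
# run above; the command itself exits 2 because kubelet/apiserver are stopped.
raw = (
    '{"Name":"NoKubernetes-668585","Host":"Running","Kubelet":"Stopped",'
    '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
)
status = json.loads(raw)

# With --no-kubernetes the VM is up while no Kubernetes components run.
host_up = status["Host"] == "Running"
k8s_down = status["Kubelet"] == "Stopped" and status["APIServer"] == "Stopped"
```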

TestNoKubernetes/serial/Start (59.33s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-668585 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-668585 --no-kubernetes --driver=kvm2 : (59.33120702s)
--- PASS: TestNoKubernetes/serial/Start (59.33s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-197888
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-197888: (1.116792939s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-668585 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-668585 "sudo systemctl is-active --quiet service kubelet": exit status 1 (186.510958ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

TestNoKubernetes/serial/ProfileList (0.94s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.94s)

TestNoKubernetes/serial/Stop (2.28s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-668585
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-668585: (2.282369389s)
--- PASS: TestNoKubernetes/serial/Stop (2.28s)

TestNoKubernetes/serial/StartNoArgs (61.13s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-668585 --driver=kvm2 
E0925 19:35:05.554802   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/skaffold-816010/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-668585 --driver=kvm2 : (1m1.129261622s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (61.13s)

TestNetworkPlugins/group/auto/Start (78.97s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-797593 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-797593 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m18.971498018s)
--- PASS: TestNetworkPlugins/group/auto/Start (78.97s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-668585 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-668585 "sudo systemctl is-active --quiet service kubelet": exit status 1 (242.26283ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestNetworkPlugins/group/kindnet/Start (107.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-797593 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-797593 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m47.207911746s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (107.21s)

TestNetworkPlugins/group/calico/Start (135.4s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-797593 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-797593 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m15.400889869s)
--- PASS: TestNetworkPlugins/group/calico/Start (135.40s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-797593 "pgrep -a kubelet"
I0925 19:36:41.059007   13239 config.go:182] Loaded profile config "auto-797593": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-797593 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7lxgf" [f45c887b-38f7-4eb9-9a9c-7bc8b413d169] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0925 19:36:41.886199   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/gvisor-745680/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:36:47.008285   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/gvisor-745680/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-7lxgf" [f45c887b-38f7-4eb9-9a9c-7bc8b413d169] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.00401438s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.30s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-797593 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-797593 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-797593 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (77.61s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-797593 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
E0925 19:37:13.771196   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:37:17.732191   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/gvisor-745680/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:37:21.694925   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/skaffold-816010/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-797593 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m17.612274778s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (77.61s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xs8j6" [9622f47c-3593-4850-a467-d8c7dc2cfe85] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004543841s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/false/Start (74.94s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-797593 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-797593 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m14.938656497s)
--- PASS: TestNetworkPlugins/group/false/Start (74.94s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-797593 "pgrep -a kubelet"
I0925 19:37:34.647196   13239 config.go:182] Loaded profile config "kindnet-797593": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.34s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-797593 replace --force -f testdata/netcat-deployment.yaml
I0925 19:37:34.974699   13239 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-s77nz" [4d378784-ff94-4529-bc1f-85453ce5ba9d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-s77nz" [4d378784-ff94-4529-bc1f-85453ce5ba9d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005230131s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-797593 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-797593 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-797593 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-bpctb" [b8667396-19fd-41e3-a43d-0dce0a67b7ed] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005895435s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (70.72s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-797593 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0925 19:38:05.830142   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-797593 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m10.72169275s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.72s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-797593 "pgrep -a kubelet"
I0925 19:38:11.008244   13239 config.go:182] Loaded profile config "calico-797593": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (14.48s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-797593 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lqqpt" [4afbcd98-4e41-43cb-aecb-8fbe09fd0180] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-lqqpt" [4afbcd98-4e41-43cb-aecb-8fbe09fd0180] Running
E0925 19:38:22.760830   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.005777699s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.48s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-797593 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-797593 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-797593 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-797593 "pgrep -a kubelet"
I0925 19:38:30.460926   13239 config.go:182] Loaded profile config "custom-flannel-797593": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-797593 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7zwlh" [97e8c52f-9f55-424f-a1dc-16b24658bbf5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7zwlh" [97e8c52f-9f55-424f-a1dc-16b24658bbf5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004795212s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-797593 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-797593 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-797593 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (74.84s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-797593 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-797593 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m14.841019844s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.84s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-797593 "pgrep -a kubelet"
I0925 19:38:48.052389   13239 config.go:182] Loaded profile config "false-797593": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.25s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-797593 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tj4sz" [7f27b5ff-807d-4f59-8495-2f70a1444139] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tj4sz" [7f27b5ff-807d-4f59-8495-2f70a1444139] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.005386843s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.25s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-797593 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-797593 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-797593 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (86.72s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-797593 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-797593 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m26.715311556s)
--- PASS: TestNetworkPlugins/group/bridge/Start (86.72s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-797593 "pgrep -a kubelet"
I0925 19:39:15.851329   13239 config.go:182] Loaded profile config "enable-default-cni-797593": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-797593 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mvmb7" [319f318b-01af-4c5b-a769-a20f516a8b7e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mvmb7" [319f318b-01af-4c5b-a769-a20f516a8b7e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004286147s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.35s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (130.3s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-797593 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
E0925 19:39:20.616223   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/gvisor-745680/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-797593 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (2m10.304661696s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (130.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-797593 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-797593 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-797593 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (179.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-482198 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-482198 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (2m59.570102317s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (179.57s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nxs8w" [5578e1b5-95a7-444c-839a-f3845a7a1505] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004416169s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-797593 "pgrep -a kubelet"
I0925 19:40:05.868915   13239 config.go:182] Loaded profile config "flannel-797593": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-797593 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8xk6w" [bdc8eda7-2fb3-401d-8334-7143f68dc67d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8xk6w" [bdc8eda7-2fb3-401d-8334-7143f68dc67d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.006403987s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-797593 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-797593 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-797593 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-797593 "pgrep -a kubelet"
I0925 19:40:28.246358   13239 config.go:182] Loaded profile config "bridge-797593": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-797593 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-x6x7p" [20bb6d9a-d439-40a6-a934-de856ad82790] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-x6x7p" [20bb6d9a-d439-40a6-a934-de856ad82790] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004745168s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (107.15s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-048668 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-048668 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1: (1m47.153962835s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (107.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-797593 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-797593 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-797593 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (77.30s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-652509 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-652509 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1: (1m17.302069978s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (77.30s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-797593 "pgrep -a kubelet"
I0925 19:41:28.236882   13239 config.go:182] Loaded profile config "kubenet-797593": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (10.30s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-797593 replace --force -f testdata/netcat-deployment.yaml
I0925 19:41:28.505151   13239 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0925 19:41:28.510353   13239 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gxw8m" [6e35a33b-b107-4b58-911c-59c36f266d08] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gxw8m" [6e35a33b-b107-4b58-911c-59c36f266d08] Running
E0925 19:41:36.753186   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/gvisor-745680/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.006150993s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-797593 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-797593 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-797593 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.19s)
E0925 19:48:22.760501   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:48:30.701510   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/custom-flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:48:32.441735   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/calico-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:48:44.758409   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/skaffold-816010/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:48:48.287476   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/false-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:48:58.405185   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/custom-flannel-797593/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-874507 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1
E0925 19:41:56.840898   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:42:01.837381   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/auto-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:42:04.457764   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/gvisor-745680/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:42:13.771221   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-874507 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1: (1m8.0734014s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.30s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-652509 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [20448a3c-7c78-44d6-9438-74ed6e7fbcb0] Pending
helpers_test.go:344: "busybox" [20448a3c-7c78-44d6-9438-74ed6e7fbcb0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [20448a3c-7c78-44d6-9438-74ed6e7fbcb0] Running
E0925 19:42:21.695260   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/skaffold-816010/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:42:22.319395   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/auto-797593/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004639723s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-652509 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-048668 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8f473bdb-9e72-40d3-82f0-3014a96ff243] Pending
helpers_test.go:344: "busybox" [8f473bdb-9e72-40d3-82f0-3014a96ff243] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8f473bdb-9e72-40d3-82f0-3014a96ff243] Running
E0925 19:42:28.437860   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kindnet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:42:28.444252   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kindnet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:42:28.455603   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kindnet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:42:28.476995   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kindnet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:42:28.518771   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kindnet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:42:28.601032   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kindnet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:42:28.762618   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kindnet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:42:29.084351   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kindnet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:42:29.726388   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kindnet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:42:31.008652   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kindnet-797593/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.005413377s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-048668 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-652509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-652509 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (13.40s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-652509 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-652509 --alsologtostderr -v=3: (13.398026715s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.76s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-048668 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0925 19:42:33.570156   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kindnet-797593/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-048668 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.642484873s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-048668 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.76s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.37s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-048668 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-048668 --alsologtostderr -v=3: (13.368909133s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-652509 -n embed-certs-652509
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-652509 -n embed-certs-652509: exit status 7 (62.151525ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-652509 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0925 19:42:38.692012   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kindnet-797593/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (306.29s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-652509 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-652509 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1: (5m6.026885884s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-652509 -n embed-certs-652509
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (306.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-482198 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d4f89626-42d8-4426-b983-cabcdee9b552] Pending
helpers_test.go:344: "busybox" [d4f89626-42d8-4426-b983-cabcdee9b552] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d4f89626-42d8-4426-b983-cabcdee9b552] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004972175s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-482198 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.59s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-048668 -n no-preload-048668
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-048668 -n no-preload-048668: exit status 7 (82.027231ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-048668 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (315.11s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-048668 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1
E0925 19:42:48.933731   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kindnet-797593/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-048668 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1: (5m14.825478242s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-048668 -n no-preload-048668
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (315.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-482198 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-482198 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-482198 --alsologtostderr -v=3
E0925 19:43:03.280891   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/auto-797593/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-482198 --alsologtostderr -v=3: (13.348781734s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-874507 create -f testdata/busybox.yaml
E0925 19:43:04.734962   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/calico-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:04.741734   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/calico-797593/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
E0925 19:43:04.753568   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/calico-797593/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [f3d37323-2b38-4644-ada5-861fca22237d] Pending
E0925 19:43:04.775867   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/calico-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:04.817457   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/calico-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:04.899053   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/calico-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:05.061094   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/calico-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:05.383151   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/calico-797593/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [f3d37323-2b38-4644-ada5-861fca22237d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0925 19:43:06.025374   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/calico-797593/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [f3d37323-2b38-4644-ada5-861fca22237d] Running
E0925 19:43:07.307430   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/calico-797593/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.00465341s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-874507 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-482198 -n old-k8s-version-482198
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-482198 -n old-k8s-version-482198: exit status 7 (66.022721ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-482198 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
TestStartStop/group/old-k8s-version/serial/SecondStart (425.96s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-482198 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
E0925 19:43:09.415157   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kindnet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:09.869698   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/calico-797593/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-482198 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (7m5.711038411s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-482198 -n old-k8s-version-482198
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (425.96s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-874507 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-874507 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.016441524s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-874507 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.38s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-874507 --alsologtostderr -v=3
E0925 19:43:14.991966   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/calico-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:22.760344   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/addons-608075/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:25.233892   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/calico-797593/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-874507 --alsologtostderr -v=3: (13.38111494s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.38s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-874507 -n default-k8s-diff-port-874507
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-874507 -n default-k8s-diff-port-874507: exit status 7 (64.229777ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-874507 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (336.84s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-874507 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1
E0925 19:43:30.701805   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/custom-flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:30.708283   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/custom-flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:30.719733   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/custom-flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:30.741233   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/custom-flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:30.782672   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/custom-flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:30.864171   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/custom-flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:31.025897   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/custom-flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:31.347782   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/custom-flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:31.989997   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/custom-flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:33.271916   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/custom-flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:35.833708   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/custom-flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:40.955999   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/custom-flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:45.716342   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/calico-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:48.286963   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/false-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:48.293546   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/false-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:48.305042   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/false-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:48.326987   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/false-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:48.369153   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/false-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:48.450814   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/false-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:48.612633   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/false-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:48.934341   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/false-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:49.576220   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/false-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:50.376427   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kindnet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:50.857839   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/false-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:51.197270   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/custom-flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:53.419530   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/false-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:43:58.541739   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/false-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:08.783677   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/false-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:11.679226   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/custom-flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:16.167826   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/enable-default-cni-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:16.174207   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/enable-default-cni-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:16.185616   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/enable-default-cni-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:16.207803   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/enable-default-cni-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:16.249199   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/enable-default-cni-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:16.330647   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/enable-default-cni-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:16.492224   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/enable-default-cni-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:16.813795   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/enable-default-cni-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:17.455405   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/enable-default-cni-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:18.737424   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/enable-default-cni-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:21.299606   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/enable-default-cni-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:25.203028   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/auto-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:26.421452   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/enable-default-cni-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:26.678207   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/calico-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:29.265903   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/false-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:36.663527   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/enable-default-cni-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:52.641589   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/custom-flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:57.144831   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/enable-default-cni-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:59.641109   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:59.647438   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:59.658876   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:59.680332   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:59.721747   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:59.803189   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:44:59.964737   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:00.286046   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:00.928069   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:02.209347   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:04.770980   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:09.892794   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:10.227294   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/false-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:12.298374   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kindnet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:20.134142   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:28.541295   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/bridge-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:28.547761   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/bridge-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:28.559223   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/bridge-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:28.580689   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/bridge-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:28.622091   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/bridge-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:28.703551   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/bridge-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:28.865541   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/bridge-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:29.187589   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/bridge-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:29.829532   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/bridge-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:31.111623   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/bridge-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:33.673617   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/bridge-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:38.106907   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/enable-default-cni-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:38.795463   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/bridge-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:40.616158   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:48.600222   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/calico-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:45:49.037331   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/bridge-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:46:09.518682   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/bridge-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:46:14.563555   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/custom-flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:46:21.577633   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:46:28.494032   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kubenet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:46:28.500487   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kubenet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:46:28.511892   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kubenet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:46:28.533337   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kubenet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:46:28.574957   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kubenet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:46:28.656445   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kubenet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:46:28.818013   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kubenet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:46:29.140324   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kubenet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:46:29.782412   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kubenet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:46:31.064615   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kubenet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:46:32.149555   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/false-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:46:33.626566   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kubenet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:46:36.753298   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/gvisor-745680/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:46:38.748638   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kubenet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:46:41.343587   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/auto-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:46:48.990529   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kubenet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:46:50.479967   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/bridge-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:47:00.028856   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/enable-default-cni-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:47:09.045318   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/auto-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:47:09.471903   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kubenet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:47:13.770882   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/functional-641225/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:47:21.694258   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/skaffold-816010/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:47:28.437227   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kindnet-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:47:43.499730   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-874507 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1: (5m36.511751778s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-874507 -n default-k8s-diff-port-874507
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (336.84s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-68fgg" [1e2ab2c3-1b49-44bd-b47f-ca5a428f4683] Running
E0925 19:47:50.433719   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kubenet-797593/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004530184s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-68fgg" [1e2ab2c3-1b49-44bd-b47f-ca5a428f4683] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004478553s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-652509 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-652509 image list --format=json
E0925 19:47:56.139756   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kindnet-797593/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.53s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-652509 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-652509 -n embed-certs-652509
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-652509 -n embed-certs-652509: exit status 2 (245.590203ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-652509 -n embed-certs-652509
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-652509 -n embed-certs-652509: exit status 2 (252.063986ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-652509 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-652509 -n embed-certs-652509
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-652509 -n embed-certs-652509
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.53s)

TestStartStop/group/newest-cni/serial/FirstStart (63.83s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-662755 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-662755 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1: (1m3.830531787s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (63.83s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zdl2r" [31c95e04-a2d2-4364-bd49-c529676d1cb0] Running
E0925 19:48:04.734719   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/calico-797593/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00489946s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zdl2r" [31c95e04-a2d2-4364-bd49-c529676d1cb0] Running
E0925 19:48:12.401909   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/bridge-797593/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005851937s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-048668 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-048668 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.55s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-048668 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-048668 -n no-preload-048668
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-048668 -n no-preload-048668: exit status 2 (251.871661ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-048668 -n no-preload-048668
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-048668 -n no-preload-048668: exit status 2 (243.810242ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-048668 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-048668 -n no-preload-048668
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-048668 -n no-preload-048668
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.55s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-cf2kp" [a7cd057c-c11b-4156-bcdd-c8cea652edef] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-695b96c756-cf2kp" [a7cd057c-c11b-4156-bcdd-c8cea652edef] Running
E0925 19:49:12.355050   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/kubenet-797593/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004023625s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-662755 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/newest-cni/serial/Stop (13.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-662755 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-662755 --alsologtostderr -v=3: (13.344557611s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.34s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-cf2kp" [a7cd057c-c11b-4156-bcdd-c8cea652edef] Running
E0925 19:49:15.991290   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/false-797593/client.crt: no such file or directory" logger="UnhandledError"
E0925 19:49:16.167901   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/enable-default-cni-797593/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004970407s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-874507 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-662755 -n newest-cni-662755
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-662755 -n newest-cni-662755: exit status 7 (63.15112ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-662755 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (39.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-662755 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-662755 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1: (38.950914949s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-662755 -n newest-cni-662755
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (39.26s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-874507 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-874507 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-874507 -n default-k8s-diff-port-874507
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-874507 -n default-k8s-diff-port-874507: exit status 2 (252.706145ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-874507 -n default-k8s-diff-port-874507
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-874507 -n default-k8s-diff-port-874507: exit status 2 (259.121946ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-874507 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-874507 -n default-k8s-diff-port-874507
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-874507 -n default-k8s-diff-port-874507
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.61s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-662755 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/newest-cni/serial/Pause (2.78s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-662755 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-662755 -n newest-cni-662755
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-662755 -n newest-cni-662755: exit status 2 (293.673017ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-662755 -n newest-cni-662755
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-662755 -n newest-cni-662755: exit status 2 (274.020142ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-662755 --alsologtostderr -v=1
E0925 19:49:59.640883   13239 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19681-6065/.minikube/profiles/flannel-797593/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-662755 -n newest-cni-662755
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-662755 -n newest-cni-662755
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.78s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-5j4ht" [05c8a0ba-6dcc-4f36-a84b-cd17ab73b216] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003924341s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-5j4ht" [05c8a0ba-6dcc-4f36-a84b-cd17ab73b216] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003262118s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-482198 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-482198 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/old-k8s-version/serial/Pause (2.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-482198 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-482198 -n old-k8s-version-482198
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-482198 -n old-k8s-version-482198: exit status 2 (237.70782ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-482198 -n old-k8s-version-482198
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-482198 -n old-k8s-version-482198: exit status 2 (238.637303ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-482198 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-482198 -n old-k8s-version-482198
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-482198 -n old-k8s-version-482198
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.38s)


Test skip (31/340)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.57s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-797593 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-797593

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-797593

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-797593

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-797593

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-797593

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-797593

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-797593

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-797593

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-797593

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-797593

>>> host: /etc/nsswitch.conf:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> host: /etc/hosts:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> host: /etc/resolv.conf:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-797593

>>> host: crictl pods:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> host: crictl containers:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> k8s: describe netcat deployment:
error: context "cilium-797593" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-797593" does not exist

>>> k8s: netcat logs:
error: context "cilium-797593" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-797593" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-797593" does not exist

>>> k8s: coredns logs:
error: context "cilium-797593" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-797593" does not exist

>>> k8s: api server logs:
error: context "cilium-797593" does not exist

>>> host: /etc/cni:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> host: ip a s:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> host: ip r s:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> host: iptables-save:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> host: iptables table nat:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-797593

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-797593

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-797593" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-797593" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-797593

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-797593

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-797593" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-797593" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-797593" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-797593" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-797593" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> host: kubelet daemon config:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> k8s: kubelet logs:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-797593

>>> host: docker daemon status:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> host: containerd config dump:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> host: crio daemon status:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> host: crio daemon config:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> host: /etc/crio:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

>>> host: crio config:
* Profile "cilium-797593" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797593"

----------------------- debugLogs end: cilium-797593 [took: 3.408049376s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-797593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-797593
--- SKIP: TestNetworkPlugins/group/cilium (3.57s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-825512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-825512
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)